2014-01-01
Myoelectric control has been used for decades to control powered upper limb prostheses. Conventional, amplitude-based control has been employed to control a single prosthesis degree of freedom (DOF), such as closing and opening of the hand. Within the last decade, new and advanced arm and hand prostheses have been constructed that are capable of actuating numerous DOFs. Pattern recognition control has been proposed to control a greater number of DOFs than conventional control, but has traditionally been limited to controlling DOFs sequentially, one at a time. However, able-bodied individuals use multiple DOFs simultaneously, and it may be beneficial to provide amputees the ability to perform simultaneous movements. In this study, four amputees who had undergone targeted motor reinnervation (TMR) surgery, and who had previous training with myoelectric prostheses, used three control strategies: 1) conventional amplitude-based myoelectric control, 2) sequential (one-DOF) pattern recognition control, and 3) simultaneous pattern recognition control. Simultaneous pattern recognition was enabled by having amputees train each simultaneous movement as a separate motion class. For tasks that required control over just one DOF, sequential pattern recognition control performed best, with the lowest average completion times and length errors and the highest completion rates. For tasks that required control over two DOFs, the simultaneous pattern recognition controller performed best of the three strategies, again with the lowest average completion times and length errors and the highest completion rates. In the two strategies in which users could employ simultaneous movements (conventional and simultaneous pattern recognition), amputees chose to use simultaneous movements 78% of the time with simultaneous pattern recognition and 64% of the time with conventional control for tasks that required two-DOF motions to reach the target.
These results suggest that when amputees are given the ability to control multiple DOFs simultaneously, they choose to perform tasks that use multiple DOFs with simultaneous movements. Additionally, they were able to perform these tasks with higher performance (faster speed, lower length error and higher completion rates) without losing substantial performance in one-DOF tasks. PMID:24410948
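The core idea above, training each simultaneous (combined) movement as its own motion class, can be sketched with synthetic features and a nearest-centroid classifier. All class names, feature values and the classifier choice here are illustrative assumptions, not the study's actual pipeline (which used real EMG features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Motion classes: single-DOF movements plus a combined (simultaneous) movement,
# each trained as its own class, as in the study described above.
CLASSES = ["rest", "hand_open", "hand_close", "wrist_flex",
           "hand_open+wrist_flex"]  # a simultaneous two-DOF movement

def make_features(class_idx, n=40, dim=8):
    """Synthetic EMG-like feature vectors (e.g., mean absolute value per channel)."""
    center = np.zeros(dim)
    center[class_idx % dim] = 3.0          # well-separated class means
    return center + rng.normal(scale=0.5, size=(n, dim))

# Train a nearest-centroid classifier: one centroid per motion class.
train = {c: make_features(i) for i, c in enumerate(CLASSES)}
centroids = {c: X.mean(axis=0) for c, X in train.items()}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A feature vector near the combined-movement centroid decodes directly as the
# simultaneous class; no blending of single-DOF outputs is needed.
probe = centroids["hand_open+wrist_flex"] + rng.normal(scale=0.1, size=8)
print(classify(probe))   # → hand_open+wrist_flex
```

The design point this illustrates: because the combined movement is a first-class label, the decoder never has to arbitrate between two single-DOF decisions at run time.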
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high intensity. Navigation accuracy was lower under high-level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted of the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
Mechanisms and neural basis of object and pattern recognition: a study with chess experts.
Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang
2010-11-01
Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.
Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task
López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa
2013-01-01
In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalography (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants who attained higher task accuracy. These findings support a dynamical view of cognitive processes in which patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436
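The alpha-power measure central to this abstract can be approximated with a simple periodogram over synthetic EEG. The sampling rate, signal and band limits below are illustrative assumptions; the study's actual analysis pipeline is not described here, and a production analysis would typically use a windowed estimator such as Welch's method:

```python
import numpy as np

fs = 250.0                       # sampling rate (Hz), illustrative
t = np.arange(0, 4, 1 / fs)      # 4 s of synthetic resting-state EEG
rng = np.random.default_rng(1)
# A 10 Hz alpha rhythm buried in broadband noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)

def band_power(x, fs, lo, hi):
    """Mean power spectral density in [lo, hi] Hz via a simple periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

alpha = band_power(eeg, fs, 8, 12)    # the band correlated with accuracy above
beta = band_power(eeg, fs, 13, 30)
print(alpha > beta)
```

With the embedded 10 Hz rhythm, the alpha band dominates the beta band, which is the kind of per-band contrast such correlational analyses start from.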
Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients
Peng, Shu-Chen; Lu, Hui-Ping; Lu, Nelson; Lin, Yung-Song; Deroche, Mickael L. D.
2017-01-01
Purpose The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. Method Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally uttered words that were contrastive in lexical tones. For Task 2, a disyllabic word (yanjing) was manipulated orthogonally, varying in fundamental-frequency (F0) contours and duration patterns. Participants identified each token with the second syllable jing pronounced with Tone 1 (a high level tone) as eyes or with Tone 4 (a high falling tone) as eyeglasses. Results CI participants' recognition accuracy was significantly lower than NH listeners' in Task 1. In Task 2, CI participants' reliance on F0 contours was significantly less than that of NH listeners; their reliance on duration patterns, however, was significantly higher than that of NH listeners. Both CI and NH listeners' performance in Task 1 was significantly correlated with their reliance on F0 contours in Task 2. Conclusion For pediatric CI recipients, lexical-tone recognition using naturally uttered words is primarily related to their reliance on F0 contours, although duration patterns may be used as an additional cue. PMID:28388709
Koelkebeck, Katja; Kohl, Waldemar; Luettgenau, Julia; Triantafillou, Susanna; Ohrmann, Patricia; Satoh, Shinji; Minoshita, Seiko
2015-07-30
A novel emotion recognition task employing photos of a Japanese Noh mask, a highly ambiguous stimulus, was evaluated. As non-Asians perceive and/or label emotions differently from Asians, we aimed to identify patterns of task performance in non-Asian healthy volunteers with a view to future patient studies. The Noh mask test was presented to 42 adult German participants. Reaction times and emotion attribution patterns were recorded. To control for emotion identification abilities, a standard emotion recognition task was used, among other measures. Questionnaires assessed personality traits. Finally, results were compared to age- and gender-matched Japanese volunteers. Compared to other tasks, German participants displayed the slowest reaction times on the Noh mask test, indicating the higher demands of ambiguous emotion recognition. They assigned more positive emotions to the mask than Japanese volunteers did, demonstrating culture-dependent emotion identification patterns. As alexithymic and anxious traits were associated with slower reaction times, personality dimensions also influenced performance. We showed an advantage of ambiguous over conventional emotion recognition tasks. Moreover, we determined emotion identification patterns in Western individuals that were influenced by personality dimensions, suggesting performance differences in clinical samples. Owing to its properties, the Noh mask test represents a promising tool in the differential diagnosis of psychiatric disorders, e.g. schizophrenia. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Facial emotion recognition in patients with focal and diffuse axonal injury.
Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita
2017-01-01
Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.
ERIC Educational Resources Information Center
Annett, John
In tasks such as sonar detection and recognition, an experienced person has a considerable superiority over a machine recognition system in auditory pattern recognition. However, people require extensive exposure to auditory patterns before achieving a high level of performance. In an attempt to discover a method of training people to recognize…
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study, pairs of pilots performed aircraft warning classification tasks using an isolated-word, speaker-dependent speech recognition system. Induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed; recognition errors were recorded by type for the isolated-word speaker-dependent system and by an offline technique for a connected-word speaker-dependent system. While errors increased with task loading for the isolated-word system, there was no such effect of task loading for the connected-word system.
The Costs and Benefits of Testing and Guessing on Recognition Memory
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed during a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490
HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.
Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B
2017-07-01
This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
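A minimal sketch of the time-surface idea described above: the context around an incoming event is the exponentially decayed time since each neighboring pixel last fired. The decay constant, patch radius, grid size and event stream below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def time_surface(last_ts, x, y, t, radius=2, tau=50e-3):
    """Exponential time-surface around event (x, y, t).

    last_ts[y, x] holds the most recent event time at each pixel (-inf if
    the pixel has never fired). The surface value decays with how long ago
    each neighbor last fired; never-fired pixels contribute exactly 0.
    """
    patch = last_ts[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

H, W = 16, 16
last_ts = np.full((H, W), -np.inf)   # no events seen yet

# Feed a short event stream: (x, y, timestamp in seconds).
# Events are kept away from the border for simplicity.
events = [(8, 8, 0.000), (9, 8, 0.010), (8, 9, 0.020), (8, 8, 0.030)]
for x, y, t in events:
    surface = time_surface(last_ts, x, y, t)   # context before this event
    last_ts[y, x] = t                          # then update pixel memory

# After the stream, recently active neighbors contribute values near 1,
# long-silent pixels contribute ~0.
print(surface.shape)   # → (5, 5)
```

In the full architecture these surfaces are the inputs that feature units cluster, with later layers using larger radii and longer time constants.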
Parks, Colleen M
2013-07-01
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Development of detection and recognition of orientation of geometric and real figures.
Stein, N L; Mandler, J M
1975-06-01
Black and white kindergarten and second-grade children were tested for accuracy of detection and recognition of orientation and location changes in pictures of real-world and geometric figures. No differences were found in accuracy of recognition between the 2 kinds of pictures, but patterns of verbalization differed on specific transformations. Although differences in accuracy were found between kindergarten and second grade on an initial recognition task, practice on a matching-to-sample task eliminated differences on a second recognition task. Few ethnic differences were found on accuracy of recognition, but significant differences were found in amount of verbal output on specific transformations. For both groups, mention of orientation changes was markedly reduced when location changes were present.
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide technical details, it conveys the main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and extracting valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules: the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, owing to noise, deformations, etc. The paper is intended as a tutorial guide bridging biology and image processing researchers for further collaboration on such a difficult target. PMID:23560739
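As one concrete instance of the binarization task listed above, here is a sketch of Otsu's global threshold (maximizing between-class variance) applied to a synthetic cell-like image. The image and all parameter values are invented for illustration; real bioimages usually demand more robust, locally adaptive methods, as the review stresses:

```python
import numpy as np

def otsu_threshold(img):
    """Global binarization threshold maximizing between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic "bioimage": dim background with a brighter cell-like blob.
rng = np.random.default_rng(2)
img = rng.normal(60, 10, (64, 64))
img[20:40, 20:40] = rng.normal(180, 10, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t
print(t, binary[30, 30], binary[5, 5])
```

The threshold lands in the gap between the two intensity clusters, so the blob pixels binarize to True and the background to False.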
Understanding eye movements in face recognition using hidden Markov models.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2014-09-16
We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition in terms of both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and incorrect recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than in the locations of the fixations alone. © 2014 ARVO.
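A deliberately simplified sketch of the scan-path modeling idea: a fully observed Markov chain over hand-labeled regions of interest (ROIs), rather than a true HMM, which would additionally infer the ROIs from raw fixation coordinates. The ROI names and the example scan path are invented:

```python
import numpy as np

ROIS = ["left_eye", "right_eye", "nose", "mouth"]
IDX = {r: i for i, r in enumerate(ROIS)}

def transition_matrix(scanpath):
    """Row-stochastic ROI-to-ROI transition probabilities from one scanpath.

    This is a fully observed Markov chain over labeled ROIs; the study's
    HMMs also learned the ROIs themselves from fixation locations.
    """
    counts = np.zeros((len(ROIS), len(ROIS)))
    for a, b in zip(scanpath, scanpath[1:]):
        counts[IDX[a], IDX[b]] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

# An "analytic"-style path shuttling between the eyes:
path = ["left_eye", "right_eye", "left_eye", "right_eye", "nose", "left_eye"]
T = transition_matrix(path)
print(T[IDX["left_eye"], IDX["right_eye"]])   # → 1.0, P(right_eye | left_eye)
```

Comparing such transition structure across participants is what let the authors separate holistic from analytic viewers: the abstract notes the diagnostic signal lies in the transitions, not just fixation locations.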
Time manages interference in visual short-term memory.
Smith, Amy V; McKeown, Denis; Bunce, David
2017-09-01
Emerging evidence suggests that age-related declines in memory may reflect a failure in pattern separation, a process believed to reduce the encoding overlap between similar stimulus representations during memory encoding. Indeed, behavioural pattern separation may be indexed by a visual continuous recognition task in which items are presented in sequence and observers report for each whether it is novel, previously viewed (old), or shares features with a previously viewed item (similar). In comparison to young adults, older adults show decreased pattern separation when the number of items between "old" and "similar" items is increased. However, the mechanisms of forgetting underpinning this type of recognition task have yet to be explored in a cognitively homogeneous group, with careful control over the parameters of the task, including elapsing time (a critical variable in models of forgetting). By extending the inter-item intervals, the number of intervening items and the overall decay interval, we observed in a young adult sample (N = 35, mean age = 19.56 years) that the critical factor governing performance was the inter-item interval. We argue that tasks using behavioural continuous recognition to index pattern separation in immediate memory will benefit from generous inter-item spacing, offering protection from inter-item interference.
The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors
NASA Astrophysics Data System (ADS)
Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.;
2017-09-01
The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology, and a series of many tens of algorithms then carefully builds up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.
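The chained, multi-algorithm design can be sketched as a pipeline of small, single-task algorithms that each refine an evolving event description. The toy clustering and particle-building steps below are illustrative stand-ins, not Pandora's actual algorithms or API:

```python
from typing import Callable, Dict, List

# Each "algorithm" addresses one narrow pattern-recognition task and passes
# an evolving event description to the next, echoing Pandora's chained design.
Event = Dict[str, object]
Algorithm = Callable[[Event], Event]

def cluster_hits(event: Event) -> Event:
    # Toy clustering: group 2D hits by integer x-coordinate.
    clusters: Dict[int, List] = {}
    for hit in event["hits_2d"]:
        clusters.setdefault(int(hit[0]), []).append(hit)
    event["clusters"] = list(clusters.values())
    return event

def build_particles(event: Event) -> Event:
    # Toy particle building: one reconstructed particle per cluster.
    event["particles"] = [{"n_hits": len(c)} for c in event["clusters"]]
    return event

def run_chain(event: Event, chain: List[Algorithm]) -> Event:
    for algorithm in chain:        # tens of algorithms in the real SDK
        event = algorithm(event)
    return event

event = {"hits_2d": [(0.1, 1.0), (0.2, 2.0), (5.3, 1.5)]}
out = run_chain(event, [cluster_hits, build_particles])
print(len(out["particles"]))   # → 2
```

The design payoff of this decomposition is that each narrow algorithm can be developed, tested and reordered independently while the chain as a whole builds up the full reconstruction.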
Do pattern recognition skills transfer across sports? A preliminary analysis.
Smeeton, Nicholas J; Ward, Paul; Williams, A Mark
2004-02-01
The ability to recognize patterns of play is fundamental to performance in team sports. While typically assumed to be domain-specific, pattern recognition skills may transfer from one sport to another if similarities exist in the perceptual features and their relations and/or in the strategies used to encode and retrieve relevant information. A transfer paradigm was employed to compare skilled and less skilled soccer, field hockey and volleyball players' pattern recognition skills. Participants viewed structured and unstructured action sequences from each sport, half of which were later randomly re-presented, intermixed with clips not previously seen. The task was to identify previously viewed action sequences quickly and accurately. Transfer of pattern recognition skill depended on the participant's skill, the sport practised, the nature of the task and the degree of structure. The skilled soccer and hockey players were quicker than the skilled volleyball players at recognizing structured soccer and hockey action sequences. Performance differences were not observed on the structured volleyball trials between the skilled soccer, field hockey and volleyball players. The skilled field hockey and soccer players were able to transfer perceptual information or strategies between their respective sports. The less skilled participants' results were less clear. Implications for domain-specific expertise, transfer and diversity across domains are discussed.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
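The optimal-decision idea can be sketched as a posterior over a tiny lexicon, with word frequency as the prior and a simple letter-confusion likelihood. The lexicon, frequencies and noise model below are invented for illustration; the Bayesian reader itself accumulates continuous perceptual samples rather than discrete letters:

```python
import numpy as np

LEXICON = {"cat": 0.6, "cot": 0.3, "cut": 0.1}   # priors ~ word frequency

def letter_likelihood(obs, letter, p_correct=0.8):
    """P(observed letter | true letter) under uniform confusion noise."""
    return p_correct if obs == letter else (1 - p_correct) / 25

def posterior(observed):
    """P(word | noisy letter string): prior times product of letter likelihoods."""
    scores = {}
    for word, prior in LEXICON.items():
        lik = np.prod([letter_likelihood(o, l) for o, l in zip(observed, word)])
        scores[word] = prior * lik
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

post = posterior("cat")
print(max(post, key=post.get))   # → cat
```

Even this toy version reproduces the qualitative frequency effect the model explains: with equally good perceptual evidence, higher-frequency words reach a decision criterion on their posterior sooner.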
Cerami, Chiara; Dodich, Alessandra; Iannaccone, Sandro; Marcone, Alessandra; Lettieri, Giada; Crespi, Chiara; Gianolli, Luigi; Cappa, Stefano F.; Perani, Daniela
2015-01-01
The behavioural variant of frontotemporal dementia (bvFTD) is a rare disease mainly affecting the social brain. FDG-PET fronto-temporal hypometabolism is a supportive feature for the diagnosis. It may also provide specific functional metabolic signatures of altered socio-emotional processing. In this study, we evaluated emotion recognition and attribution deficits and FDG-PET cerebral metabolic patterns at the group and individual levels in a sample of sporadic bvFTD patients, exploring the cognitive-functional correlations. Seventeen probable mild bvFTD patients (10 male and 7 female; age 67.8±9.9) were administered standardized and validated versions of social cognition tasks assessing the recognition of basic emotions and the attribution of emotions and intentions (i.e., the Ekman 60-Faces test, Ek60F, and the Story-based Empathy Task, SET). FDG-PET was analysed using an optimized voxel-based SPM method at the single-subject and group levels. Severe deficits of emotion recognition and processing characterized the bvFTD condition. At the group level, metabolic dysfunction in the right amygdala, temporal pole and middle cingulate cortex was highly correlated with emotion recognition and attribution performance. At the single-subject level, however, heterogeneous impairments on the social cognition tasks emerged, and different metabolic patterns, involving limbic structures and prefrontal cortices, were also observed. The derangement of a right limbic network is associated with altered socio-emotional processing in bvFTD patients, but different hypometabolic FDG-PET patterns and heterogeneous performances on social tasks exist at the individual level. PMID:26513651
A multimodal approach to emotion recognition ability in autism spectrum disorders.
Jones, Catherine R G; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J S; Happé, Francesca; Scott, Sophie K; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-03-01
Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (≥ 80 vs. < 80). We found no evidence of a fundamental emotion recognition deficit in the ASD group and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD. © 2010 The Authors. Journal of Child Psychology and Psychiatry © 2010 Association for Child and Adolescent Mental Health.
The Wireless Ubiquitous Surveillance Testbed
2003-03-01
Thesis front matter (fragmentary): table-of-contents sections on eye patterns and facial recognition; an appendix table of facial recognition products (after Polemi and BiometriTech, 2002); the stated research tasks include setting up the biometric sensors and facial recognition surveillance as applied to homeland security.
Cultural differences in visual object recognition in 3-year-old children
Kuwabara, Megumi; Smith, Linda B.
2016-01-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children and likelihood of recognition increased for U.S., but not Japanese children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576
Cultural differences in visual object recognition in 3-year-old children.
Kuwabara, Megumi; Smith, Linda B
2016-07-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2018-01-01
The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.
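The multi-algorithm approach described above can be illustrated with a toy sketch in Python (the real Pandora SDK is a C++ framework with much richer cluster and vertex abstractions; the `Event` class, proximity cut, and track/shower tagging rule below are invented purely for illustration):

```python
# Toy sketch of a multi-algorithm reconstruction chain: each small algorithm
# addresses one specific task, and together they build up a picture of the
# event. All names and cuts here are hypothetical.

class Event:
    def __init__(self, hits):
        self.hits = sorted(hits)   # 1D hit coordinates, for simplicity
        self.clusters = []         # filled in by the clustering algorithm
        self.tags = {}             # filled in by the tagging algorithm

def proximity_clustering(event, gap=1.0):
    """Group neighbouring hits separated by less than `gap`."""
    cluster = [event.hits[0]]
    for h in event.hits[1:]:
        if h - cluster[-1] < gap:
            cluster.append(h)
        else:
            event.clusters.append(cluster)
            cluster = [h]
    event.clusters.append(cluster)

def track_shower_tagging(event, min_hits=3):
    """Tag long clusters as track-like, short ones as shower-like."""
    for i, c in enumerate(event.clusters):
        event.tags[i] = "track" if len(c) >= min_hits else "shower"

pipeline = [proximity_clustering, track_shower_tagging]   # order matters
event = Event([0.0, 0.4, 0.9, 5.0, 5.3, 9.9])
for algorithm in pipeline:
    algorithm(event)
print(event.tags)   # {0: 'track', 1: 'shower', 2: 'shower'}
```

The point of the design is that each algorithm stays small and testable in its own topology, while the chain as a whole yields an automated reconstruction.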
An information-processing model of three cortical regions: evidence in episodic memory retrieval.
Sohn, Myeong-Ho; Goode, Adam; Stenger, V Andrew; Jung, Kwan-Jin; Carter, Cameron S; Anderson, John R
2005-03-01
ACT-R (Anderson, J.R., et al., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261) relates the inferior dorso-lateral prefrontal cortex to a retrieval buffer that holds information retrieved from memory and the posterior parietal cortex to an imaginal buffer that holds problem representations. Because the number of changes in a problem representation is not necessarily correlated with retrieval difficulties, it is possible to dissociate prefrontal-parietal activations. In two fMRI experiments, we examined this dissociation using the fan effect paradigm. Experiment 1 compared a recognition task, in which representation requirement remains the same regardless of retrieval difficulty, with a recall task, in which both representation and retrieval loads increase with retrieval difficulty. In the recognition task, the prefrontal activation revealed a fan effect but not the parietal activation. In the recall task, both regions revealed fan effects. In Experiment 2, we compared visually presented stimuli and aurally presented stimuli using the recognition task. While only the prefrontal region revealed the fan effect, the activation patterns in the prefrontal and the parietal region did not differ by stimulus presentation modality. In general, these results provide support for the prefrontal-parietal dissociation in terms of retrieval and representation and the modality-independent nature of the information processed by these regions. Using ACT-R, we also provide computational models that explain patterns of fMRI responses in these two areas during recognition and recall.
Bastin, Christine; Van der Linden, Martial
2003-01-01
Whether the format of a recognition memory task influences the contribution of recollection and familiarity to performance is a matter of debate. The authors investigated this issue by comparing the performance of 64 young (mean age = 21.7 years; mean education = 14.5 years) and 62 older participants (mean age = 64.4 years; mean education = 14.2 years) on a yes-no and a forced-choice recognition task for unfamiliar faces using the remember-know-guess procedure. Familiarity contributed more to forced-choice than to yes-no performance. Moreover, older participants, who showed a decrease in recollection together with an increase in familiarity, performed better on the forced-choice task than on the yes-no task, whereas younger participants showed the opposite pattern.
The Effects of Aging and IQ on Item and Associative Memory
Ratcliff, Roger; Thapar, Anjali; McKoon, Gail
2011-01-01
The effects of aging and IQ on performance were examined in four memory tasks: item recognition, associative recognition, cued recall, and free recall. For item and associative recognition, accuracy and the response time distributions for correct and error responses were explained by Ratcliff’s (1978) diffusion model, at the level of individual participants. The values of the components of processing identified by the model for the recognition tasks, as well as accuracy for cued and free recall, were compared across levels of IQ ranging from 85 to 140 and age (college-age, 60-74 year olds, and 75-90 year olds). IQ had large effects on the quality of the evidence from memory on which decisions were based in the recognition tasks and accuracy in the recall tasks, except for the oldest participants for whom some of the measures were near floor values. Drift rates in the recognition tasks, accuracy in the recall tasks, and IQ all correlated strongly with each other. However, there was a small decline with age in drift rates for item recognition and a large decline for associative recognition and accuracy in cued recall (about 70 percent). In contrast, there were large age effects on boundary separation and nondecision time (which correlated across tasks), but little effect of IQ. The implications of these results for single- and dual-process models of item recognition are discussed and it is concluded that models that deal with both RTs and accuracy are subject to many more constraints than models that deal with only one of these measures. Overall, the results of the study show a complicated but interpretable pattern of interactions that present important targets for response time and memory models. PMID:21707207
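Ratcliff's (1978) diffusion model referenced above treats a two-choice decision as noisy evidence accumulation between two boundaries, with drift rate (evidence quality), boundary separation, and nondecision time as the key components. A minimal simulation sketch (parameter values are illustrative, not fitted to this study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_trial(v, a, ter, dt=0.001, s=0.1):
    """One trial: evidence starts at a/2 and drifts until it hits 0 or a.
    v = drift rate, a = boundary separation, ter = nondecision time."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), ter + t   # (correct response?, RT in s)

trials = [diffusion_trial(v=0.3, a=0.1, ter=0.4) for _ in range(2000)]
acc = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy ~ {acc:.2f}, mean RT ~ {mean_rt:.2f} s")
```

In this framing, lower drift rates (poorer memory evidence, as with lower IQ in the study) reduce accuracy and slow responses, whereas wider boundaries or longer nondecision times (the components that showed age effects) slow responses without changing evidence quality.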
Quantum pattern recognition with multi-neuron interactions
NASA Astrophysics Data System (ADS)
Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.
2018-03-01
We present a quantum neural network with multi-neuron interactions for pattern recognition tasks by a combination of extended classic Hopfield network and adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. Also, we propose a preprocessing approach to classifying the pattern space S to suppress spurious patterns. The results of pattern clustering show that for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also remarkable that the associative memory function depends on the location of the unknown bits, apart from d and the load parameter α.
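The classical Hopfield network that this quantum scheme extends stores patterns in Hebbian weights and retrieves a full pattern from a corrupted probe. A minimal classical sketch (the toy patterns and flipped bits are invented; the paper's multi-neuron and adiabatic quantum machinery is not reproduced here):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products, zero self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, steps=10):
    """Synchronous +/-1 updates for at most `steps` iterations."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = train_hopfield(patterns)
probe = patterns[0].copy()
probe[:2] *= -1            # corrupt two "unknown" bits
print(recall(W, probe))    # recovers patterns[0]
```

With few, near-orthogonal stored patterns the network settles back onto the stored pattern nearest the probe; spurious attractors appear as the load grows, which is the problem the paper's preprocessing step targets.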
NASA Astrophysics Data System (ADS)
Syryamkin, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.
2018-05-01
Image recognition is an information process implemented by an information converter (an intelligent information channel, or recognition system) with an input and an output. The system's input receives information about the characteristics of the presented objects; its output reports which classes (generalized images) the recognized objects are assigned to. When an automated pattern recognition system is created and operated, a number of problems must be solved; different authors formulate these tasks, and the set of tasks itself, differently, since both depend to some extent on the specific mathematical model underlying a given recognition system. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
NASA Astrophysics Data System (ADS)
Zhou, Zheng; Liu, Chen; Shen, Wensheng; Dong, Zhen; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2017-04-01
A binary spike-time-dependent plasticity (STDP) protocol based on one resistive-switching random access memory (RRAM) device was proposed and experimentally demonstrated in the fabricated RRAM array. Based on the STDP protocol, a novel unsupervised online pattern recognition system including RRAM synapses and CMOS neurons is developed. Our simulations show that the system can efficiently complete the handwritten-digit recognition task, which indicates the feasibility of using the RRAM-based binary STDP protocol in neuromorphic computing systems to obtain good performance.
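A generic binary STDP rule of the kind the abstract describes drives a synapse to 1 when the presynaptic spike precedes the postsynaptic spike within a timing window, and to 0 otherwise. A sketch (the window length and spike timings are hypothetical; the device-level RRAM switching protocol is not reproduced here):

```python
def binary_stdp(t_pre, t_post, window=10.0):
    """Return the new binary weight: potentiate (1) if the pre spike
    precedes the post spike within `window` ms, else depress (0)."""
    dt = t_post - t_pre
    return 1 if 0.0 < dt <= window else 0

# Hypothetical spike times (ms): only the first pre spike falls in the window.
pre_times = [5.0, 20.0, 40.0]
post_time = 12.0
weights = [binary_stdp(tp, post_time) for tp in pre_times]
print(weights)  # [1, 0, 0]
```

Restricting weights to two states is what makes such a rule attractive for RRAM: a single set/reset operation on the device can implement the update.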
Memory Asymmetry of Forward and Backward Associations in Recognition Tasks
Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han
2013-01-01
There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiment 1–2) or pairs (Experiment 3–6) during the study phase. They then recalled the word by a cue during a cued recall task (Experiment 1–4), and judged whether the presented two words were in the same or in a different order compared to the study phase during a recognition task (Experiment 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgment in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326
Williams, A Mark; Hodges, Nicola J; North, Jamie S; Barton, Gabor
2006-01-01
The perceptual-cognitive information used to support pattern-recognition skill in soccer was examined. In experiment 1, skilled players were quicker and more accurate than less-skilled players at recognising familiar and unfamiliar soccer action sequences presented on film. In experiment 2, these action sequences were converted into point-light displays, with superficial display features removed and the positions of players and the relational information between them made more salient. Skilled players were more accurate than less-skilled players in recognising sequences presented in point-light form, implying that each pattern of play can be defined by the unique relations between players. In experiment 3, various offensive and defensive players were occluded for the duration of each trial in an attempt to identify the most important sources of information underpinning successful performance. A decrease in response accuracy was observed under occluded compared with non-occluded conditions and the expertise effect was no longer observed. The relational information between certain key players, team-mates and their defensive counterparts may provide the essential information for effective pattern-recognition skill in soccer. Structural feature analysis, temporal phase relations, and knowledge-based information are effectively integrated to facilitate pattern recognition in dynamic sport tasks.
Mind wandering in text comprehension under dual-task conditions.
Dixon, Peter; Li, Henry
2013-01-01
In two experiments, subjects responded to on-task probes while reading under dual-task conditions. The secondary task was to monitor the text for occurrences of the letter e. In Experiment 1, reading comprehension was assessed with a multiple-choice recognition test; in Experiment 2, subjects recalled the text. In both experiments, the secondary task replicated the well-known "missing-letter effect" in which detection of e's was less effective for function words and the word "the." Letter detection was also more effective when subjects were on task, but this effect did not interact with the missing-letter effect. Comprehension was assessed in both the dual-task conditions and in control single-task conditions. In the single-task conditions, both recognition (Experiment 1) and recall (Experiment 2) were better when subjects were on task, replicating previous research on mind wandering. Surprisingly, though, comprehension under dual-task conditions only showed an effect of being on task when measured with recall; there was no effect on recognition performance. Our interpretation of this pattern of results is that subjects generate responses to on-task probes on the basis of a retrospective assessment of the contents of working memory. Further, we argue that under dual-task conditions, the contents of working memory is not closely related to the reading processes required for accurate recognition performance. These conclusions have implications for models of text comprehension and for the interpretation of on-task probe responses. PMID:24101909
Kesner, Raymond P; Kirk, Ryan A; Yu, Zhenghui; Polansky, Caitlin; Musso, Nick D
2016-03-01
In order to examine the role of the dorsal dentate gyrus (dDG) in slope (vertical space) recognition and possible pattern separation, various slope (vertical space) degrees were used in a novel exploratory paradigm to measure novelty detection for changes in slope recognition memory and slope memory pattern separation in Experiment 1. The results indicate that control rats displayed a slope recognition memory function with a pattern separation process for slope memory that is dependent upon the magnitude of change in slope between study and test phases. In contrast, the dDG lesioned rats displayed an impairment in slope recognition memory, though because there was no significant interaction between the two groups and slope memory, a reliable pattern separation impairment for slope could not be firmly established in the dDG lesioned rats. In Experiment 2, in order to determine whether the dDG plays a role in shades-of-grey spatial context recognition and possible pattern separation, shades of grey were used in a novel exploratory paradigm to measure novelty detection for changes in the shades-of-grey context environment. The results indicate that control rats displayed a shades-of-grey context pattern separation effect across levels of separation of context (shades of grey). In contrast, the dDG lesioned rats displayed a significant interaction between the two groups and levels of shades of grey, suggesting impairment in a pattern separation function for levels of shades of grey. In Experiment 3, in order to determine whether the dorsal CA3 (dCA3) plays a role in object pattern completion, a new task was used that required less training and based the choice on selecting the correct set of objects in a two-choice discrimination. The results indicated that control rats displayed a pattern completion function based on the availability of one, two, three or four cues. In contrast, the dCA3 lesioned rats displayed a significant interaction between the two groups and the number of available objects, suggesting impairment in a pattern completion function for object cues. Copyright © 2015 Elsevier Inc. All rights reserved.
Attentional biases and memory for emotional stimuli in men and male rhesus monkeys.
Lacreuse, Agnès; Schatz, Kelly; Strazzullo, Sarah; King, Hanna M; Ready, Rebecca
2013-11-01
We examined attentional biases for social and non-social emotional stimuli in young adult men and compared the results to those of male rhesus monkeys (Macaca mulatta) previously tested in a similar dot-probe task (King et al. in Psychoneuroendocrinology 37(3):396-409, 2012). Recognition memory for these stimuli was also analyzed in each species, using a recognition memory task in humans and a delayed non-matching-to-sample task in monkeys. We found that both humans and monkeys displayed a similar pattern of attentional biases toward threatening facial expressions of conspecifics. The bias was significant in monkeys and of marginal significance in humans. In addition, humans, but not monkeys, exhibited an attentional bias away from negative non-social images. Attentional biases for social and non-social threat differed significantly, with both species showing a pattern of vigilance toward negative social images and avoidance of negative non-social images. Positive stimuli did not elicit significant attentional biases for either species. In humans, emotional content facilitated the recognition of non-social images, but no effect of emotion was found for the recognition of social images. Recognition accuracy was not affected by emotion in monkeys, but response times were faster for negative relative to positive images. Altogether, these results suggest shared mechanisms of social attention in humans and monkeys, with both species showing a pattern of selective attention toward threatening faces of conspecifics. These data are consistent with the view that selective vigilance to social threat is the result of evolutionary constraints. Yet, selective attention to threat was weaker in humans than in monkeys, suggesting that regulatory mechanisms enable non-anxious humans to reduce sensitivity to social threat in this paradigm, likely through enhanced prefrontal control and reduced amygdala activation. 
In addition, the findings emphasize important differences in attentional biases to social versus non-social threat in both species. Differences in the impact of emotional stimuli on recognition memory between monkeys and humans will require further study, as methodological differences in the recognition tasks may have affected the results.
The Boundaries of Hemispheric Processing in Visual Pattern Recognition
1989-11-01
Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.
Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J
2016-01-01
Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.
Significance Statement Meaningless auditory patterns could be implicitly encoded and stored in long-term memory.Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks.Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms.Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities.
Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns
Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J.
2016-01-01
Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10− and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. 
Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities. PMID:27932941
Bennetts, Rachel J; Mole, Joseph; Bate, Sarah
2017-09-01
Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.
O'Connor, Akira R; Moulin, Chris J A
2013-01-01
Recent neuropsychological and neuroscientific research suggests that people who experience more déjà vu display characteristic patterns in normal recognition memory. We conducted a large individual differences study (n = 206) to test these predictions using recollection and familiarity parameters recovered from a standard memory task. Participants reported déjà vu frequency and a number of its correlates, and completed a recognition memory task analogous to a Remember-Know procedure. The individual difference measures replicated an established correlation between déjà vu frequency and frequency of travel, and recognition performance showed well-established word frequency and accuracy effects. Contrary to predictions, no relationships were found between déjà vu frequency and recollection or familiarity memory parameters from the recognition test. We suggest that déjà vu in the healthy population reflects a mismatch between errant memory signaling and memory monitoring processes not easily characterized by standard recognition memory task performance. PMID:24409159
O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H
2015-09-01
Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.
Maintenance of youth-like processing protects against false memory in later adulthood.
Fandakova, Yana; Lindenberger, Ulman; Shing, Yee Lee
2015-02-01
Normal cognitive aging compromises the ability to form and retrieve associations among features of a memory episode. One indicator of this age-related deficit is older adults' difficulty in detecting and correctly rejecting new associations of familiar items. Comparing 28 younger and 30 older adults on a continuous recognition task with word pairs, we found that older adults whose activation patterns deviate less from the average pattern of younger adults while detecting re-paired associations show the following: (1) higher overall memory and fewer false recognitions; (2) stronger functional connectivity of prefrontal regions with middle temporal and parahippocampal gyrus; and (3) higher recall and strategic categorical clustering in an independently assessed free recall task. Deviations from the average young-adult network reflected underactivation of frontoparietal regions instead of overactivation of regions not activated by younger adults. We conclude that maintenance of youth-like task-relevant activation patterns is critical for preserving memory functions in later adulthood.
Repeating Patterns in Kindergarten: Findings from Children's Enactments of Two Activities
ERIC Educational Resources Information Center
Tsamir, Pessia; Tirosh, Dina; Levenson, Esther S.; Barkai, Ruthi; Tabach, Michal
2017-01-01
This paper describes kindergarten children's engagement with two patterning activities. The first activity includes two tasks in which children are asked to choose possible ways for extending two different repeating patterns and the second activity calls for comparing different pairs of repeating patterns. Children's recognition of the unit of…
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre
2017-06-01
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
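The NEF representation scheme that such systems build on — a population of simple neurons encodes a value through randomly parameterised tuning curves, and least-squares linear decoders read it back — can be illustrated with a minimal numpy sketch. The 64-neuron population size matches the abstract's neural core, but the rectified-linear tuning, parameter ranges, and variable names are our assumptions, not the FPGA implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64                       # neurons per core, as in the abstract's neural core
x = np.linspace(-1, 1, 200)  # the scalar value the population represents

# Random encoders, gains, and biases define each neuron's tuning curve
enc = rng.choice([-1.0, 1.0], size=N)
gain = rng.uniform(0.5, 2.0, size=N)
bias = rng.uniform(-1.0, 1.0, size=N)

# Rectified-linear firing rates: a_i(x) = max(0, gain_i * enc_i * x + bias_i)
A = np.maximum(0.0, gain * enc * x[:, None] + bias)  # shape (200, 64)

# Solve for least-squares linear decoders that recover the represented value
d, *_ = np.linalg.lstsq(A, x, rcond=None)
x_hat = A @ d  # decoded estimate of x from the population activity
```

The same encode/decode structure generalises to vectors and to transformations between populations, which is what lets NEF compose subnetworks into larger cognitive systems.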
Dissociation between facial and bodily expressions in emotion recognition: A case study.
Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo
2017-12-21
Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
30 CFR 46.5 - New miner training.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and loose or... aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition training specific...
30 CFR 46.5 - New miner training.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and loose or... aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition training specific...
Pattern Perception and Pictures for the Blind
ERIC Educational Resources Information Center
Heller, Morton A.; McCarthy, Melissa; Clark, Ashley
2005-01-01
This article reviews recent research on perception of tangible pictures in sighted and blind people. Haptic picture naming accuracy is dependent upon familiarity and access to semantic memory, just as in visual recognition. Performance is high when haptic picture recognition tasks do not depend upon semantic memory. Viewpoint matters for the ease…
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape.
Running Improves Pattern Separation during Novel Object Recognition.
Bolz, Leoni; Heigele, Stefanie; Bischofberger, Josef
2015-10-09
Running increases adult neurogenesis and improves pattern separation in various memory tasks, including context fear conditioning and touch-screen based spatial learning. However, it is unknown whether pattern separation is improved in spontaneous behavior, not emotionally biased by positive or negative reinforcement. Here we investigated the effect of voluntary running on pattern separation during novel object recognition in mice using relatively similar or substantially different objects. We show that running increases hippocampal neurogenesis but does not affect object recognition memory with a 1.5 h delay after the sample phase. By contrast, at a 24 h delay, running significantly improves recognition memory for similar objects, whereas highly different objects can be distinguished by both running and sedentary mice. These data show that physical exercise improves pattern separation, independent of negative or positive reinforcement. In sedentary mice there is a pronounced temporal gradient for remembering object details. In running mice, however, increased neurogenesis improves hippocampal coding and temporally preserves distinction of novel objects from familiar ones.
Collected Notes on the Workshop for Pattern Discovery in Large Databases
NASA Technical Reports Server (NTRS)
Buntine, Wray (Editor); Delalto, Martha (Editor)
1991-01-01
These collected notes are a record of material presented at the Workshop. They address core data analysis tasks that have traditionally required statistical or pattern recognition techniques, including classification, discrimination, clustering, supervised and unsupervised learning, and discovery and diagnosis, i.e., general pattern discovery.
Visual Object Pattern Separation Varies in Older Adults
ERIC Educational Resources Information Center
Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.
2013-01-01
Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
2014-12-01
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as a comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. 
Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.
Unique and Persistent Individual Patterns of Brain Activity Across Different Memory Retrieval Tasks
Miller, Michael B.; Donovan, Christa-Lynn; Van Horn, John D.; German, Elaine; Sokol-Hessner, Peter; Wolford, George L.
2009-01-01
Fourteen subjects were scanned in two fMRI sessions separated by several months. During each session, subjects performed an episodic retrieval task, a semantic retrieval task, and a working memory task. We found that 1) despite extensive intersubject variability in the pattern of activity across the whole brain, individual activity patterns were stable over time, 2) activity patterns of the same individual performing different tasks were more similar than activity patterns of different individuals performing the same task, and 3) individual differences in decision criterion on a recognition test predicted the degree of similarity between any two individuals' patterns of brain activity, but individual differences in memory accuracy or similarity in structural anatomy did not. These results imply that the exclusive use of group maps may be ineffective in profiling the pattern of activations for a given task. This may be particularly true for a task like episodic retrieval, which is relatively strategic and can involve widely-distributed specialized processes that are peripheral to the actual retrieval of stored information. Further, these processes may be differentially engaged depending on individual differences in cognitive processing and/or physiology. PMID:19540922
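The core logic of such pattern-similarity analyses — correlating whole-brain activity vectors within and between individuals — can be sketched as follows. The vector length, noise level, and variable names are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two whole-brain activity vectors."""
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(7)

# Simulate one subject's stable idiosyncratic activity pattern plus task noise
base = rng.standard_normal(1000)
subj1_task_a = base + 0.3 * rng.standard_normal(1000)
subj1_task_b = base + 0.3 * rng.standard_normal(1000)
subj2_task_a = rng.standard_normal(1000)  # an unrelated subject

# Same individual across tasks vs. different individuals on the same task
within = pattern_similarity(subj1_task_a, subj1_task_b)
between = pattern_similarity(subj1_task_a, subj2_task_a)
```

Under this toy model, `within` exceeds `between`, mirroring the study's second finding that individual identity dominates task identity in activity patterns.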
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
Sign Perception and Recognition in Non-Native Signers of ASL
Morford, Jill P.; Carlson, Martina L.
2011-01-01
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition. PMID:21686080
Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah
2016-09-01
Brain computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activity recorded by EEG and transforms it into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and thereby make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared on alpha-beta frequency bands with only mental imagination of cursor movements. First, power spectral density (PSD) features of the EEG signals are extracted and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross-validation technique.
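The first two stages of the pipeline described above — band-limited PSD features followed by PCA dimensionality reduction — might look like this minimal numpy sketch. The sampling rate, channel count, band edges, and function names are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def psd_features(eeg, fs=256):
    """Alpha (8-13 Hz) and beta (13-30 Hz) band power per channel via the periodogram."""
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / (fs * n)
    alpha = psd[..., (freqs >= 8) & (freqs < 13)].sum(axis=-1)
    beta = psd[..., (freqs >= 13) & (freqs < 30)].sum(axis=-1)
    return np.concatenate([alpha, beta], axis=-1)

def pca_reduce(X, k=2):
    """Project the feature matrix onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy data: 20 trials x 4 channels x 1 s of simulated EEG at 256 Hz
rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 4, 256))
X = np.array([psd_features(t) for t in trials])  # (20, 8) feature matrix
Z = pca_reduce(X, k=2)                           # (20, 2) reduced features
```

The reduced features `Z` would then feed a classifier (SVM, LVQ, MLNN, or PNN in the study) evaluated with k-fold cross-validation.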
Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka
2014-01-01
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation durations on the face, which reflect a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.
Training and cockpit design to promote expert performance
NASA Technical Reports Server (NTRS)
Chappell, Sheryl L.
1991-01-01
The behavior of expert pilots in familiar situations is explored and the implications for better training programs and cockpit designs are stated. Experts in familiar operational situations performing highly practiced tasks are said to recognize and respond to complex situations using pattern recognition or intuition. For some tasks this class of behaviors is desirable; performance can be improved by reducing cognitive load and increasing speed and accuracy. Part-task training, training for monitoring and techniques for the transfer of knowledge can facilitate the development of these skills. Methods for promoting pattern recognition through pilot-aircraft interface design include the use of spatial presentations of information and providing triggering events. In some instances, the familiar, well-practiced behavior is not appropriate and it is desirable to prevent the response. When prevention is necessary, barriers can be constructed in the interface to remind the pilot of the inappropriateness of the response.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
30 CFR 46.6 - Newly hired experienced miner training.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Instruction on the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and... health and safety aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition...
30 CFR 46.6 - Newly hired experienced miner training.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Instruction on the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and... health and safety aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition...
Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?
ERIC Educational Resources Information Center
Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.
2006-01-01
The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…
Auditory perception of a human walker.
Cottrell, David; Campbell, Megan E J
2014-01-01
When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
Neural correlates of auditory recognition memory in the primate dorsal temporal pole
Ng, Chi-Wing; Plakke, Bethany
2013-01-01
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324
Discovering the Sequential Structure of Thought
ERIC Educational Resources Information Center
Anderson, John R.; Fincham, Jon M.
2014-01-01
Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…
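The HMM half of this approach can be illustrated with a toy Viterbi decoder. The two mental states, transition matrix, and emission probabilities below are invented for illustration; they are not the model fitted in the study.

```python
# Toy Viterbi decoder for a two-state HMM. States, transitions, and
# emissions are hypothetical stand-ins for the paper's fitted model.
states = ["encode", "solve"]
start = {"encode": 0.9, "solve": 0.1}
trans = {"encode": {"encode": 0.8, "solve": 0.2},
         "solve":  {"encode": 0.1, "solve": 0.9}}
emit = {"encode": {"A": 0.7, "B": 0.3},   # "A"/"B": discretized brain patterns
        "solve":  {"A": 0.2, "B": 0.8}}

def viterbi(obs):
    """Return the most likely hidden-state sequence for observations obs."""
    v = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# A run of "A"-like patterns followed by "B"-like patterns decodes to an
# "encode" phase followed by a "solve" phase; state durations can then be
# read off the decoded path, as in the method's duration analysis.
```

The decoded path segments the observation sequence into contiguous mental-state episodes, which is how durations per experimental condition would be recovered.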
The effect of inversion on face recognition in adults with autism spectrum disorder.
Hedley, Darren; Brewer, Neil; Young, Robyn
2015-05-01
Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than the mouth region and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.
Comparing the visual spans for faces and letters
He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.
2015-01-01
The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858
Neural network-based system for pattern recognition through a fiber optic bundle
NASA Astrophysics Data System (ADS)
Gamo-Aranda, Javier; Rodriguez-Horche, Paloma; Merchan-Palacios, Miguel; Rosales-Herrera, Pablo; Rodriguez, M.
2001-04-01
A neural network based system to identify images transmitted through a Coherent Fiber-optic Bundle (CFB) is presented. Patterns are generated in a computer, displayed on a Spatial Light Modulator, imaged onto the input face of the CFB, and recovered optically by a CCD sensor array for further processing. Input and output optical subsystems were designed and used to that end. The recognition step of the transmitted patterns is made by a powerful, widely-used, neural network simulator running on the control PC. A complete PC-based interface was developed to control the different tasks involved in the system. An optical analysis of the system capabilities was carried out prior to performing the recognition step. Several neural network topologies were tested, and the corresponding numerical results are also presented and discussed.
Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.
Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro
2011-12-01
The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation).
Effective connectivity of visual word recognition and homophone orthographic errors
Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve
2015-01-01
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity by processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-of-scanner tests: a high spelling skills (HSSs) group and a low spelling skills (LSSs) group. During the fMRI session, two experimental tasks were applied (spelling recognition task and visuoperceptual recognition task). Regions of Interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus, and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role in several areas. In general, these results are consistent with the major findings in partial studies about linguistic activities but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070
Patterns of False Memory in Patients with Huntington's Disease.
Chen, I-Wen; Chen, Chiung-Mei; Wu, Yih-Ru; Hua, Mau-Sun
2017-06-01
Increased false memory recognition in patients with Huntington's disease (HD) has been widely reported; however, the underlying memory constructive processes remain unclear. The present study explored gist memory, item-specific memory, and monitoring ability in patients with HD. Twenty-five patients (including 13 patients with mild HD and 12 patients with moderate-to-severe HD) and 30 healthy comparison participants (HC) were recruited. We used the Deese-Roediger-McDermott (DRM) paradigm to investigate participants' false recognition patterns, along with neuropsychological tests to assess general cognitive function. Both mild and moderate-to-severe patients with HD showed significant executive functioning and episodic memory impairment. On the DRM tasks, both HD patient groups showed significantly impaired performance in tasks assessing unrelated false recognition and item-specific memory as compared to the HC group; moderate-to-severe patients performed more poorly than mild patients did. Only moderate-to-severe patients exhibited significantly poorer related false recognition index scores than HCs in the verbal DRM task; performance of the HD patient groups was comparable to the HC group on the pictorial DRM task. It appears that diminished verbatim memory and monitoring ability are early signs of cognitive decline during the HD course. Conversely, gist memory is relatively robust, with only partial decline during advanced-stage HD. Our findings suggest that medial temporal lobe function is relatively preserved compared to that of frontal-related structures in early HD. Thus, gist-based memory rehabilitation programs might be beneficial for patients with HD.
Training Letter and Orthographic Pattern Recognition in Children with Slow Naming Speed
ERIC Educational Resources Information Center
Conrad, Nicole J.; Levy, Betty Ann
2011-01-01
Although research has established that performance on a rapid automatized naming (RAN) task is related to reading, the nature of this relationship is unclear. Bowers (2001) proposed that processes underlying performance on the RAN task and orthographic knowledge make independent and additive contributions to reading performance. We examined the…
Escalating dose, multiple binge methamphetamine regimen does not impair recognition memory in rats.
Clark, Robert E; Kuczenski, Ronald; Segal, David S
2007-07-01
Rats exposed to methamphetamine (METH) in an acute high dose "binge" pattern have been reported to exhibit a persistent deficit in a novel object recognition (NOR) task, which may suggest a potential risk for human METH abusers. However, most high dose METH abusers initially use lower doses before progressively increasing the dose, only eventually engaging in multiple daily administrations. To simulate this pattern of METH exposure, we administered progressively increasing doses of METH to rats over a 14 day interval, then treated them with daily METH binges for 11 days. This treatment resulted in a persistent deficit in striatal dopamine (DA) levels of approximately 20%. We then tested them in a NOR task under a variety of conditions. We could not detect a deficit in their performance in the NOR task under any of the testing conditions. These results suggest that mechanisms other than or additional to the decrement in striatal DA associated with an acute METH binge are responsible for the deficit in the NOR task, and that neuroadaptations consequential to prolonged escalating dose METH pretreatment militate against these mechanisms.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Variability in the impairment of recognition memory in patients with frontal lobe lesions.
Bastin, Christine; Van der Linden, Martial; Lekeu, Françoise; Andrés, Pilar; Salmon, Eric
2006-10-01
Fourteen patients with frontal lobe lesions and 14 normal subjects were tested on a recognition memory task that required discriminating between target words, new words that are synonyms of the targets and unrelated distractors. A deficit was found in 12 of the patients. Moreover, three different patterns of recognition impairment were identified: (I) poor memory for targets, (II) normal hits but increased false recognitions for both types of distractors, (III) normal hit rates, but increased false recognitions for synonyms only. Differences in terms of location of the damage and behavioral characteristics between these subgroups were examined. An encoding deficit was proposed to explain the performance of patients in subgroup I. The behavioral patterns of the patients in subgroups II and III could be interpreted as deficient post-retrieval verification processes and an inability to recollect item-specific information, respectively.
NASA Astrophysics Data System (ADS)
Wade, Alex Robert; Fitzke, Frederick W.
1998-08-01
We describe an image processing system which we have developed to align autofluorescence and high-magnification images taken with a laser scanning ophthalmoscope. The low signal to noise ratio of these images makes pattern recognition a non-trivial task. However, once n images are aligned and averaged, the noise levels drop by a factor of √n and the image quality is improved. We include examples of autofluorescence images and images of the cone photoreceptor mosaic obtained using this system.
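The averaging step (not the alignment algorithm itself) is easy to sketch: averaging n aligned frames leaves the signal unchanged while the standard deviation of uncorrelated noise falls by roughly √n. A simulated one-row "image" with Gaussian noise:

```python
import random
import statistics

def average_frames(frames):
    """Pixel-wise mean of equally sized, already-aligned frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(0)
signal = [10.0] * 1000                     # one row of a noise-free scene
n = 16
# n noisy acquisitions of the same (perfectly aligned) scene
frames = [[v + random.gauss(0, 1.0) for v in signal] for _ in range(n)]

noise_single = statistics.pstdev(p - s for p, s in zip(frames[0], signal))
noise_avg = statistics.pstdev(p - s for p, s in zip(average_frames(frames), signal))
# With 16 frames the residual noise is about a quarter (1/sqrt(16)) of the
# single-frame noise, which is the improvement the abstract describes.
```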
Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov
2012-01-01
We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI (‘Shape task’). 25 young subjects (mean age: 24.0 years; STD=3.8 years) were instructed to study a list of either 1, 2, or 3 unnamable nonsense line drawings for 3 seconds (‘stimulus phase’ or STIM). Subsequently, the screen went blank for 7 seconds (‘retention phase’ or RET), and then displayed a probe stimulus for 3 seconds in which subjects indicated with a differential button press whether the probe was contained in the studied shape-array or not (‘probe phase’ or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phase (p<0.01), while no significant pattern could be discerned during the probe phase. Spatial covariance patterns that were obtained from an earlier version of this task (Habeck et al., 2005b) using 1, 3, or 6 letters (‘Letter task’) were also prospectively applied to their corresponding task phases in the current non-verbal task version. Interestingly, subject expression of covariance patterns from both verbal and non-verbal retention phases correlated positively in the non-verbal task for all memory loads (p<0.0001). Both patterns also involved similar frontoparietal brain regions that were increasing in activity with memory load, and mediofrontal and temporal regions that were decreasing. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task (p<0.005). These findings point to similarities in the neural substrates of verbal and non-verbal rehearsal processes.
Encoding processes, on the other hand, are critically dependent on the to-be-remembered material, and seem to necessitate material-specific neural substrates. PMID:22652306
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
An Evaluation and Comparison of Several Measures of Image Quality for Television Displays
1979-01-01
vehicles, buildings, or faces, or they may be artificial, such as tri-bar patterns, rectangles, or sine waves. The typical objective image quality assessment... Snyder (1974b) was able to obtain very good correlations with reaction time and correct responses for a face recognition task. Display quality was varied... recognition versus log JNDA for the target recognition study of Chapter 4, 5) graph of angle subtended by target at recognition, versus log JNDA for the
Huo, Guanying
2017-01-01
As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614
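For context, the softmax head that traditional CNNs use for classification, and that the proposed BPR stage replaces, simply normalizes the network's class logits into probabilities. A minimal stand-alone version:

```python
import math

def softmax(logits):
    """Map raw class scores (logits) to probabilities summing to 1."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# The largest logit receives the largest probability, and the outputs sum
# to 1. Note that softmax always distributes all probability mass over the
# fixed set of known classes, which is part of the limited capacity the
# abstract attributes to it; BPR's geometrical cover sets take a different
# approach to class membership.
```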
Patterns of Attachment and Emotional Competence in Middle Childhood
ERIC Educational Resources Information Center
Colle, Livia; Del Giudice, Marco
2011-01-01
The study investigated the relationship between patterns of attachment and emotional competence at the beginning of middle childhood in a sample of 122 seven-year-olds. A new battery of tasks was developed in order to assess two facets of emotional competence (emotion recognition and knowledge of regulation strategies). Attachment was related to…
A shared representation of order between encoding and recognition in visual short-term memory.
Kalm, Kristjan; Norris, Dennis
2017-07-15
Many complex tasks require people to bind individual events into a sequence that can be held in short term memory (STM). For this purpose information about the order of the individual events in the sequence needs to be maintained in an active and accessible form in STM over a period of a few seconds. Here we investigated how the temporal order information is shared between the presentation and response phases of an STM task. We trained a classification algorithm on the fMRI activity patterns from the presentation phase of the STM task to predict the order of the items during the subsequent recognition phase. While voxels in a number of brain regions represented positional information during either the presentation or the recognition phase, only voxels in the lateral prefrontal cortex (PFC) and the anterior temporal lobe (ATL) represented position consistently across task phases. A shared positional code in the ATL might reflect verbal recoding of visual sequences to facilitate the maintenance of order information over several seconds.
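The cross-phase decoding logic can be sketched with simulated data: train a position classifier on "presentation-phase" patterns and test it on new "recognition-phase" patterns. A nearest-centroid rule stands in for the authors' classifier, and the toy three-dimensional "voxel" vectors are invented for illustration.

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(vals) / len(vectors) for vals in zip(*vectors)]

def predict(centroids, x):
    """Label whose centroid is nearest to pattern x (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(centroids[label], x))

random.seed(1)
# Each serial position evokes a distinct mean 'voxel' pattern (simulated).
base = {1: [1.0, 0.0, 0.0], 2: [0.0, 1.0, 0.0], 3: [0.0, 0.0, 1.0]}

def noisy(v):
    return [c + random.gauss(0, 0.2) for c in v]

# "Presentation phase": fit centroids on 20 simulated trials per position.
centroids = {pos: centroid([noisy(v) for _ in range(20)]) for pos, v in base.items()}

# "Recognition phase": decode position from fresh trials of the same code.
tests = [(pos, noisy(v)) for pos, v in base.items() for _ in range(30)]
accuracy = sum(predict(centroids, x) == pos for pos, x in tests) / len(tests)
# High accuracy here corresponds to a positional code shared across phases;
# a region without a shared code would decode at chance (1/3).
```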
Complex Event Recognition Architecture
NASA Technical Reports Server (NTRS)
Fitzgerald, William A.; Firby, R. James
2009-01-01
Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
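A toy flavour of such a pattern language, combining conjunction, disjunction, and negation over point events pooled from multiple input streams (the real CERA syntax, API, and matching algorithm are not reproduced here):

```python
from dataclasses import dataclass

@dataclass
class Event:
    stream: str   # which input stream the event arrived on
    name: str     # event type
    time: float   # timestamp

# Minimal declarative combinators over a window of events from all streams.
def match_all(window, *names):          # conjunction
    seen = {e.name for e in window}
    return all(n in seen for n in names)

def match_any(window, *names):          # disjunction
    return any(e.name in names for e in window)

def absent(window, name):               # negation
    return all(e.name != name for e in window)

window = [Event("sensors", "overheat", 1.0),
          Event("power", "brownout", 1.4)]

# "Signal an anomaly if overheat AND brownout occur with NO shutdown seen."
anomaly = match_all(window, "overheat", "brownout") and absent(window, "shutdown")
```

As in the description above, combinators like these can be nested recursively to build richer, temporally extended patterns, with a run-time matcher signalling when a pattern completes.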
Leal, Stephanie L; Noche, Jessica A; Murray, Elizabeth A; Yassa, Michael A
2017-01-01
While aging is generally associated with episodic memory decline, not all older adults exhibit memory loss. Furthermore, emotional memories are not subject to the same extent of forgetting and appear preserved in aging. We conducted high-resolution fMRI during a task involving pattern separation of emotional information in older adults with and without age-related memory impairment (characterized by performance on a word-list learning task: low performers: LP vs. high performers: HP). We found signals consistent with emotional pattern separation in hippocampal dentate (DG)/CA3 in HP but not in LP individuals, suggesting a deficit in emotional pattern separation. During false recognition, we found increased DG/CA3 activity in LP individuals, suggesting that hyperactivity may be associated with overgeneralization. We additionally observed a selective deficit in basolateral amygdala-lateral entorhinal cortex-DG/CA3 functional connectivity in LP individuals during pattern separation of negative information. During negative false recognition, LP individuals showed increased medial temporal lobe functional connectivity, consistent with overgeneralization. Overall, these results suggest a novel mechanistic account of individual differences in emotional memory alterations exhibited in aging.
Arizpe, Joseph; Kravitz, Dwight J.; Yovel, Galit; Baker, Chris I.
2012-01-01
Fixation patterns are thought to reflect cognitive processing and, thus, index the most informative stimulus features for task performance. During face recognition, initial fixations to the center of the nose have been taken to indicate this location is optimal for information extraction. However, the use of fixations as a marker for information use rests on the assumption that fixation patterns are predominantly determined by stimulus and task, despite the fact that fixations are also influenced by visuo-motor factors. Here, we tested the effect of starting position on fixation patterns during a face recognition task with upright and inverted faces. While we observed differences in fixations between upright and inverted faces, likely reflecting differences in cognitive processing, there was also a strong effect of start position. Over the first five saccades, fixation patterns across start positions were only coarsely similar, with most fixations around the eyes. Importantly, however, the precise fixation pattern was highly dependent on start position with a strong tendency toward facial features furthest from the start position. For example, the often-reported tendency toward the left over right eye was reversed for the left starting position. Further, delayed initial saccades for central versus peripheral start positions suggest greater information processing prior to the initial saccade, highlighting the experimental bias introduced by the commonly used center start position. Finally, the precise effect of face inversion on fixation patterns was also dependent on start position. These results demonstrate the importance of a non-stimulus, non-task factor in determining fixation patterns. The patterns observed likely reflect a complex combination of visuo-motor effects and simple sampling strategies as well as cognitive factors. 
These different factors are very difficult to tease apart and therefore great caution must be applied when interpreting absolute fixation locations as indicative of information use, particularly at a fine spatial scale. PMID:22319606
1999-05-26
Looking for a faster computer? How about an optical computer that processes data streams simultaneously and works with the speed of light? In space, NASA researchers have formed optical thin films. By turning these thin films into very fast optical computer components, scientists could improve computer tasks, such as pattern recognition. Dr. Hossin Abdeldayem, physicist at NASA/Marshall Space Flight Center (MSFC) in Huntsville, AL, is working with lasers as part of an optical system for pattern recognition. These systems can be used for automated fingerprinting, photographic scanning and the development of sophisticated artificial intelligence systems that can learn and evolve. Photo credit: NASA/Marshall Space Flight Center (MSFC)
A steady state visually evoked potential investigation of memory and ageing.
Macpherson, Helen; Pipingas, Andrew; Silberstein, Richard
2009-04-01
Old age is generally accompanied by a decline in memory performance. Specifically, neuroimaging and electrophysiological studies have revealed that there are age-related changes in the neural correlates of episodic and working memory. This study investigated age-associated changes in the steady state visually evoked potential (SSVEP) amplitude and latency associated with memory performance. Participants were 15 older (59-67 years) and 14 younger (20-30 years) adults who performed an object working memory (OWM) task and a contextual recognition memory (CRM) task, whilst the SSVEP was recorded from 64 electrode sites. Retention of a single object in the low demand OWM task was characterised by smaller frontal SSVEP amplitude and latency differences in older adults than in younger adults, indicative of an age-associated reduction in neural processes. Recognition of visual images in the more difficult CRM task was accompanied by larger, more sustained SSVEP amplitude and latency decreases over temporal parietal regions in older adults. In contrast, the more transient, frontally mediated pattern of activity demonstrated by younger adults suggests that younger and older adults utilize different neural resources to perform recognition judgements. The results provide support for compensatory processes in the aging brain; at lower task demands, older adults demonstrate reduced neural activity, whereas at greater task demands neural activity is increased.
Visual scanning behavior is related to recognition performance for own- and other-age faces
Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela
2015-01-01
It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated to discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056
Scherer, Reinhold; Faller, Josef; Friedrich, Elisabeth V C; Opisso, Eloy; Costa, Ursula; Kübler, Andrea; Müller-Putz, Gernot R
2015-01-01
Brain-computer interfaces (BCIs) translate oscillatory electroencephalogram (EEG) patterns into action. Different mental activities modulate spontaneous EEG rhythms in various ways. Non-stationarity and inherent variability of EEG signals, however, make reliable recognition of modulated EEG patterns challenging. Able-bodied individuals who use a BCI for the first time achieve, on average, binary classification performance of about 75%. Performance in users with central nervous system (CNS) tissue damage is typically lower. User training generally enhances reliability of EEG pattern generation and thus also robustness of pattern recognition. In this study, we investigated the impact of mental tasks on binary classification performance in BCI users with CNS tissue damage, such as persons with stroke or spinal cord injury (SCI). Motor imagery (MI), that is, the kinesthetic imagination of movement (e.g. squeezing a rubber ball with the right hand), is the "gold standard" and mainly used to modulate EEG patterns. Based on our recent results in able-bodied users, we hypothesized that pair-wise combination of "brain-teaser" (e.g. mental subtraction and mental word association) and "dynamic imagery" (e.g. hand and feet MI) tasks significantly increases classification performance of induced EEG patterns in the selected end-user group. Within-day (How stable is the classification within a day?) and between-day (How well does a model trained on day one perform on unseen data of day two?) analysis of variability of mental task pair classification in nine individuals confirmed the hypothesis. We found that the use of the classical MI task pair hand vs. feet leads to significantly lower classification accuracy, on average up to 15% lower, in most users with stroke or SCI. User-specific selection of task pairs was again essential to enhance performance.
We expect that this evidence will contribute significantly to making imagery-based BCI technology accessible to a larger population of users, including individuals with special needs due to CNS damage. PMID:25992718
Social approach and emotion recognition in fragile X syndrome.
Williams, Tracey A; Porter, Melanie A; Langdon, Robyn
2014-03-01
Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to chronological age- (CA-) and mental age- (MA-) matched controls, the FXS group performed significantly more poorly on the emotion recognition tasks, and displayed a bias towards detecting negative emotions. Moreover, after controlling for emotion recognition deficits, the FXS group displayed significantly reduced ratings of social approachability. These findings suggest that a social anxiety pattern, rather than poor socioemotional processing, may best explain the social avoidance observed in FXS.
The use of global image characteristics for neural network pattern recognition
NASA Astrophysics Data System (ADS)
Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.
2017-04-01
A recognition system is considered in which information is conveyed by images of symbols captured by a television camera. Coefficients of a two-dimensional Fourier transform, generated in a special way, serve as the object descriptors. The classification task is solved by a single-layer neural network trained on reference images. Fast learning of the neural network, with a per-neuron calculation of coefficients, is applied.
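The abstract leaves the details unspecified, but the pipeline it describes (low-order two-dimensional Fourier coefficients as global image descriptors, fed to a simple classifier trained on reference images) can be sketched roughly as follows. The choice of coefficients (the `kmax` lowest-frequency magnitudes) and the nearest-reference rule standing in for the single-layer network are assumptions for illustration, not the paper's method:

```python
import cmath

def dft2_magnitude(img, kmax=2):
    """Low-order 2D DFT magnitudes as a global image descriptor.
    `img` is a list of rows of grayscale values; only coefficients with
    frequency indices below `kmax` are kept (an assumed choice)."""
    H, W = len(img), len(img[0])
    feats = []
    for u in range(kmax):
        for v in range(kmax):
            c = sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / H + v * x / W))
                    for y in range(H) for x in range(W))
            feats.append(abs(c) / (H * W))  # normalized magnitude
    return feats

def nearest_reference(feats, references):
    """Classify by squared distance to per-class reference descriptors
    (a stand-in for the paper's single-layer network)."""
    return min(references,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(feats, references[lbl])))
```

Low-frequency magnitudes are insensitive to small translations of a symbol, which is one plausible reason for preferring global Fourier characteristics over raw pixels here.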
Honey, Garry D; O'loughlin, Chris; Turner, Danielle C; Pomarol-Clotet, Edith; Corlett, Philip R; Fletcher, Paul C
2006-02-01
Ketamine is increasingly used to model the cognitive deficits and symptoms of schizophrenia. We investigated the extent to which ketamine administration in healthy volunteers reproduces the deficits in episodic recognition memory and agency source monitoring reported in schizophrenia. Intravenous infusions of placebo or 100 ng/ml ketamine were administered to 12 healthy volunteers in a double-blind, placebo-controlled, randomized, within-subjects study. In response to presented words, the subject or experimenter performed a deep or shallow encoding task, providing a 2(drug) x 2(depth of processing) x 2(agency) factorial design. At test, subjects discriminated old/new words, and recalled the sources (task and agent). Data were analyzed using multinomial modelling to identify item recognition, source memory for agency and task, and guessing biases. Under ketamine, item recognition and cued recall of deeply encoded items were impaired, replicating previous findings. In contrast to schizophrenia, there was a reduced tendency to externalize agency source guessing biases under ketamine. While the recognition memory deficit observed with ketamine is consistent with previous work and with schizophrenia, the changes in source memory differ from those reported in schizophrenic patients. This difference may account for the pattern of psychopathology induced by ketamine.
Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel
2013-01-01
Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology, the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure.
The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive interventions. PMID:23805122
Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.
Annett, J M; Leslie, J C
1996-08-01
Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.
Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall
Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat
2014-01-01
Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture was presented for five seconds with an adjoining three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing and participants were not informed about the task's purpose. Using pattern recognition techniques, participants’ EOG signals in response to repeated and non-repeated pictures were classified for the with- and without-sound stages. The method was validated with eight different participants. The recognition rate in the “with sound” stage was significantly lower than in the “without sound” stage. The results demonstrated that the familiarity of visual-auditory stimuli can be detected from EOG signals and that the auditory input potentially improves the visual recall process. PMID:25436085
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can support commercial intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
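FIIAT itself is a C library and its API is not reproduced here; the following sketch shows only the underlying summed-area-table idea that makes rectangular sums O(1), written in Python for brevity (function names are illustrative):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] holds the sum of img over the
    rectangle of rows [0, y) and columns [0, x). An extra row and
    column of zeros keeps the region lookup branch-free."""
    H, W = len(img), len(img[0])
    ii = [[0] * (W + 1) for _ in range(H + 1)]
    for y in range(H):
        row_sum = 0
        for x in range(W):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def region_sum(ii, top, left, height, width):
    """O(1) sum of img[top:top+height][left:left+width]
    using four table lookups."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]
```

Once the table is built in a single pass, any rectangle's sum costs four lookups regardless of its size, which is what makes real-time filtering and texture descriptors feasible on processors without floating-point hardware.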
Training Spiking Neural Models Using Artificial Bee Colony
Vazquez, Roberto A.; Garro, Beatriz A.
2015-01-01
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models has limited their use in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees in the task of exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to note that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves with this kind of learning strategy. PMID:25709644
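The search engine behind the paper's method, ABC itself, is generic: employed bees refine food sources (candidate parameter vectors), onlookers pick sources fitness-proportionally, and scouts replace sources that stop improving. A compact sketch of that canonical loop follows; parameter names and defaults are illustrative, and the paper applies the search to a spiking neuron's classification error rather than the toy objective used here:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food  # stagnation counters for the scout phase

    def try_move(i):
        # Perturb one dimension of source i toward/away from another source.
        j = rng.randrange(n_food - 1)
        if j >= i:
            j += 1
        d = rng.randrange(dim)
        x = foods[i][:]
        x[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[j][d])
        x[d] = min(hi, max(lo, x[d]))
        v = f(x)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = x, v, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):            # employed bees
            try_move(i)
        fits = [1.0 / (1.0 + v) for v in vals]
        total = sum(fits)
        for _ in range(n_food):            # onlookers: fitness-proportional
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    break
            try_move(i)
        for i in range(n_food):            # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
    best = min(range(n_food), key=lambda i: vals[i])
    return foods[best], vals[best]
```

To train a single spiking neuron as in the paper, `f` would be the neuron's misclassification error as a function of its synaptic weights.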
Support for an auto-associative model of spoken cued recall: evidence from fMRI.
de Zubicaray, Greig; McMahon, Katie; Eastburn, Mathew; Pringle, Alan J; Lorenz, Lina; Humphreys, Michael S
2007-03-02
Cued recall and item recognition are considered the standard episodic memory retrieval tasks. However, only the neural correlates of the latter have been studied in detail with fMRI. Using an event-related fMRI experimental design that permits spoken responses, we tested hypotheses from an auto-associative model of cued recall and item recognition [Chappell, M., & Humphreys, M. S. (1994). An auto-associative neural network for sparse representations: Analysis and application to models of recognition and cued recall. Psychological Review, 101, 103-128]. In brief, the model assumes that cues elicit a network of phonological short term memory (STM) and semantic long term memory (LTM) representations distributed throughout the neocortex as patterns of sparse activations. This information is transferred to the hippocampus which converges upon the item closest to a stored pattern and outputs a response. Word pairs were learned from a study list, with one member of the pair serving as the cue at test. Unstudied words were also intermingled at test in order to provide an analogue of yes/no recognition tasks. Compared to incorrectly rejected studied items (misses) and correctly rejected (CR) unstudied items, correctly recalled items (hits) elicited increased responses in the left hippocampus and neocortical regions including the left inferior prefrontal cortex (LIPC), left mid lateral temporal cortex and inferior parietal cortex, consistent with predictions from the model. This network was very similar to that observed in yes/no recognition studies, supporting proposals that cued recall and item recognition involve common rather than separate mechanisms.
Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi
2017-01-01
Pattern recognition-based myoelectric control degrades over the long term owing to a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least-squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824
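The abstract does not give PAC's update rule. As a hedged illustration of why incremental least-squares updates can run in milliseconds per cycle, here is the classical recursive least-squares (RLS) update for a linear classifier: O(dim²) per sample, no retraining from scratch. This is textbook RLS, not the authors' PAC or LS-SVC:

```python
def make_rls(dim, lam=1.0):
    """Recursive least squares for a linear model w.x fitted to +/-1 targets.
    State: weights w and inverse-covariance P, updated per sample via
    the Sherman-Morrison identity."""
    w = [0.0] * dim
    P = [[(1.0 / lam if i == j else 0.0) for j in range(dim)]
         for i in range(dim)]

    def update(x, y):
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                 # gain vector
        err = y - sum(w[i] * x[i] for i in range(dim))
        for i in range(dim):
            w[i] += k[i] * err
            for j in range(dim):
                P[i][j] -= k[i] * Px[j]

    def predict(x):
        return 1 if sum(w[i] * x[i] for i in range(dim)) >= 0 else -1

    return update, predict
```

Each new EMG feature vector refines the model in place, which is the property that keeps per-cycle adaptation cost flat as data accumulates.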
Rats Fed a Diet Rich in Fats and Sugars Are Impaired in the Use of Spatial Geometry.
Tran, Dominic M D; Westbrook, R Frederick
2015-12-01
A diet rich in fats and sugars is associated with cognitive deficits in people, and rodent models have shown that such a diet produces deficits on tasks assessing spatial learning and memory. Spatial navigation is guided by two distinct types of information: geometrical, such as distance and direction, and featural, such as luminance and pattern. To clarify the nature of diet-induced spatial impairments, we provided rats with standard chow supplemented with sugar water and a range of energy-rich foods eaten by people, and then we assessed their place- and object-recognition memory. Rats exposed to this diet performed comparably with control rats fed only chow on object recognition but worse on place recognition. This impairment on the place-recognition task was present after only a few days on the diet and persisted across tests. Critically, this spatial impairment was specific to the processing of distance and direction.
Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.
Zhou, Pan; Lin, Zhouchen; Zhang, Chao
2016-05-01
Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.
Talker variability in audio-visual speech perception
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
Event-related potential correlates of declarative and non-declarative sequence knowledge.
Ferdinand, Nicola K; Rünger, Dennis; Frensch, Peter A; Mecklinger, Axel
2010-07-01
The goal of the present study was to demonstrate that declarative and non-declarative knowledge acquired in an incidental sequence learning task contributes differentially to memory retrieval and leads to dissociable ERP signatures in a recognition memory task. For this purpose, participants performed a sequence learning task and were classified as verbalizers, partial verbalizers, or nonverbalizers according to their ability to verbally report the systematic response sequence. Thereafter, ERPs were recorded in a recognition memory task time-locked to sequence triplets that were either part of the previously learned sequence or not. Although all three groups executed old sequence triplets faster than new triplets in the recognition memory task, qualitatively distinct ERP patterns were found for participants with and without reportable knowledge. Verbalizers and, to a lesser extent, partial verbalizers showed an ERP correlate of recollection for parts of the incidentally learned sequence. In contrast, nonverbalizers showed a different ERP effect with a reverse polarity that might reflect priming. This indicates that an ensemble of qualitatively different processes is at work when declarative and non-declarative sequence knowledge is retrieved. Our findings thus favor a multiple-systems view postulating that explicit and implicit learning are supported by different and functionally independent systems.
Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.
Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung
2007-05-01
This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and their recurrent properties make them suitable for processing speech patterns with temporal characteristics. For n-word recognition, n SRNFNs are created, one modeling each word; each SRNFN receives the current frame feature and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. For filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with HSRNFN for noisy speech recognition tasks.
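The recognition criterion (each word model predicts the next frame of the word it models, and the word whose predictor accumulates the least error wins) can be reduced to a sketch using scalar features and a plain linear predictor standing in for the SRNFN. This reduction is illustrative only; the paper's models are recurrent fuzzy networks over multidimensional frame features:

```python
def train_predictor(frames):
    """Fit a per-word predictor next ~= a * cur + b by least squares over
    consecutive frame pairs (a stand-in for training one SRNFN per word)."""
    pairs = list(zip(frames[:-1], frames[1:]))
    n = len(pairs)
    sx = sum(c for c, _ in pairs)
    sy = sum(nx for _, nx in pairs)
    sxx = sum(c * c for c, _ in pairs)
    sxy = sum(c * nx for c, nx in pairs)
    denom = (n * sxx - sx * sx) or 1e-9   # guard against a degenerate fit
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def recognize(utterance, models):
    """Pick the word whose predictor accumulates the least squared
    next-frame prediction error over the utterance."""
    def err(model):
        a, b = model
        return sum((nx - (a * c + b)) ** 2
                   for c, nx in zip(utterance[:-1], utterance[1:]))
    return min(models, key=lambda w: err(models[w]))
```

The appeal of this scheme is that no discriminative training across words is needed: each model only learns its own word's dynamics, and recognition falls out of comparing prediction errors.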
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
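The winning linking hypothesis above amounts to a linear readout of population firing rates. A minimal sketch, assuming illustrative firing-rate vectors in place of recorded IT responses and a simple perceptron rule in place of the study's actual fitting procedure:

```python
def train_readout(trials, labels, lr=0.1, epochs=50):
    """Perceptron-style learning of readout weights + bias for a binary task."""
    n = len(trials[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for rates, y in zip(trials, labels):        # y in {-1, +1}
            s = sum(wi * ri for wi, ri in zip(w, rates)) + b
            if y * s <= 0:                          # misclassified trial
                w = [wi + lr * y * ri for wi, ri in zip(w, rates)]
                b += lr * y
    return w, b

def decide(w, b, rates):
    """Behavioural choice as a simple weighted sum of firing rates."""
    return 1 if sum(wi * ri for wi, ri in zip(w, rates)) + b > 0 else -1

# Toy "population": object A (+1) drives unit 0, object B (-1) drives unit 1.
trials = [[9.0, 1.0], [8.0, 2.0], [1.0, 9.0], [2.0, 8.0]]
labels = [1, 1, -1, -1]
w, b = train_readout(trials, labels)
print([decide(w, b, t) for t in trials])  # recovers [1, 1, -1, -1]
```

The point of the sketch is the model class, not the learning rule: once weights are fixed, every recognition decision is a single weighted sum over the population response.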
NASA Technical Reports Server (NTRS)
Knasel, T. Michael
1996-01-01
The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms, including evolutionary learning algorithms, neural networks, and adaptive clustering techniques, to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report.
The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. This dual use of ATR technology clearly demonstrates the flexibility and power of our approach.
Oyedotun, Oyebade K; Khashman, Adnan
2017-02-01
Humans are adept at recognizing patterns and discovering even abstract features that are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significantly, we can usually recognize these banknote denominations irrespective of which parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By analogy, the robustness of intelligent systems performing the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which, from inception, have taken many important cues related to structure and learning rules from the human nervous/cognitive processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognitive system even further. In this paper, we investigate three hypothetical cognition frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented within this work.
Remote sensing of agricultural crops and soils
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator)
1982-01-01
Research results and accomplishments of sixteen tasks in the following areas are described: (1) corn and soybean scene radiation research; (2) soil moisture research; (3) sampling and aggregation research; (4) pattern recognition and image registration research; and (5) computer and data base services.
Biologically inspired emotion recognition from speech
NASA Astrophysics Data System (ADS)
Caponetti, Laura; Buscicchio, Cosimo Alessandro; Castellano, Giovanna
2011-12-01
Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.
Actively Paranoid Patients with Schizophrenia Over Attribute Anger to Neutral Faces
Pinkham, Amy E.; Brensinger, Colleen; Kohler, Christian; Gur, Raquel E.; Gur, Ruben C.
2010-01-01
Previous investigations of the influence of paranoia on facial affect recognition in schizophrenia have been inconclusive as some studies demonstrate better performance for paranoid relative to non-paranoid patients and others show that paranoid patients display greater impairments. These studies have been limited by small sample sizes and inconsistencies in the criteria used to define groups. Here, we utilized an established emotion recognition task and a large sample to examine differential performance in emotion recognition ability between patients who were actively paranoid (AP) and those who were not actively paranoid (NAP). Accuracy and error patterns on the Penn Emotion Recognition test (ER40) were examined in 132 patients (64 NAP and 68 AP). Groups were defined based on the presence of paranoid ideation at the time of testing rather than diagnostic subtype. AP and NAP patients did not differ in overall task accuracy; however, an emotion by group interaction indicated that AP patients were significantly worse than NAP patients at correctly labeling neutral faces. A comparison of error patterns on neutral stimuli revealed that the groups differed only in misattributions of anger expressions, with AP patients being significantly more likely to misidentify a neutral expression as angry. The present findings suggest that paranoia is associated with a tendency to over attribute threat to ambiguous stimuli and also lend support to emerging hypotheses of amygdala hyperactivation as a potential neural mechanism for paranoid ideation. PMID:21112186
Shang, Chi-Yung; Gau, Susan Shur-Fen
2012-10-01
Atomoxetine is efficacious in reducing symptoms of attention-deficit/hyperactivity disorder (ADHD), but its effect on visual memory and attention needs more investigation. This study aimed to assess the effect of atomoxetine on visual memory, attention, and school function in boys with ADHD in Taiwan. This was an open-label 12-week atomoxetine treatment trial among 30 drug-naïve boys with ADHD, aged 8-16 years. Before administration of atomoxetine, the participants were assessed using psychiatric interviews, the Wechsler Intelligence Scale for Children, 3rd edition (WISC-III), the school function measures of the Chinese version of the Social Adjustment Inventory for Children and Adolescents (SAICA), the Conners' Continuous Performance Test (CPT), and the tasks of the Cambridge Neuropsychological Test Automated Battery (CANTAB) involving visual memory and attention: Pattern Recognition Memory, Spatial Recognition Memory, and Reaction Time, which were reassessed at weeks 4 and 12. Our results showed significant improvement in pattern recognition memory and spatial recognition memory as measured by the CANTAB tasks, sustained attention and response inhibition as measured by the CPT, and reaction time as measured by the CANTAB after treatment with atomoxetine for 4 weeks or 12 weeks. In addition, atomoxetine significantly enhanced school functioning in children with ADHD. Our findings suggest that atomoxetine was associated with significant improvement in visual memory, attention, and school functioning in boys with ADHD.
Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D
2016-01-27
Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. 
We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.
Covert face recognition in congenital prosopagnosia: a group study.
Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max
2012-03-01
Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.
Pattern recognition tool based on complex network-based approach
NASA Astrophysics Data System (ADS)
Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir
2013-02-01
This work proposes a generalization of the method introduced by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex network rules to characterize it, we generalize the technique into a mathematical tool for characterizing signals, curves, and sets of points. To evaluate the descriptive power of the proposal, an experiment on plant identification based on leaf vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, but these structures are complex and difficult to represent as signals or curves, and thus difficult to analyze with a classical pattern recognition approach. Here, we model the veins as a set of points and represent them as graphs. As features, we use degree and joint-degree measurements taken during a dynamic network evolution. The results demonstrate that the technique has good discriminative power and can be used for plant identification, as well as for other complex pattern recognition tasks.
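The descriptor family described above can be sketched as follows, under the assumption that a shape is a set of 2D points, edges connect points closer than a growing distance threshold, and the mean node degree at each threshold forms the feature vector; the thresholds and the degree-only measurement are illustrative simplifications, not the authors' exact parameters.

```python
import math

def degree_signature(points, thresholds):
    """Mean node degree of the distance-thresholded graph at each threshold."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    signature = []
    for t in thresholds:
        degrees = [sum(1 for j in range(n) if j != i and dist[i][j] <= t)
                   for i in range(n)]
        signature.append(sum(degrees) / n)
    return signature

# Unit square: no edges below distance 1, side edges at 1, diagonals at ~1.41.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(degree_signature(square, [0.5, 1.0, 1.5]))  # [0.0, 2.0, 3.0]
```

Tracking how the degree distribution evolves as the threshold grows is what turns a static point set into a discriminative signature.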
Miller, Vonda H; Jansen, Ben H
2008-12-01
Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning for category discrimination in an oscillatory neural network model, with classification accomplished using a trajectory-based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies show that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.
Handwritten digits recognition based on immune network
NASA Astrophysics Data System (ADS)
Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe
2011-11-01
With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and features are extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification; its characteristic is a novel network with parallel computation and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN, and SVM. The results show that the novel classification algorithm based on the immune network gives promising performance and stable behavior for handwritten digit recognition.
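As a point of comparison, the KNN baseline mentioned above can be sketched in a few lines; the toy feature vectors below stand in for features extracted from MNIST digit images.

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Toy two-class data standing in for extracted digit features.
train = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
labels = [0, 0, 1, 1]
print(knn_predict(train, labels, [0.05, 0.05]))  # prints 0
print(knn_predict(train, labels, [0.95, 0.95]))  # prints 1
```

Immune-network classifiers differ from KNN in how they select and compress the reference set, but the final comparison to stored exemplars is closely related, which is why KNN is the natural baseline.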
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
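The decision rule described above, assigning each sample to the class that maximizes prior times likelihood, can be sketched with a Gaussian Parzen-window estimator, the density estimate underlying a PNN. The data and the smoothing parameter here are illustrative, not the paper's laboratory measurements.

```python
import math

def parzen_likelihood(samples, x, sigma=0.5):
    """Average Gaussian kernel centred on each training sample."""
    k = len(x)
    norm = (2 * math.pi * sigma ** 2) ** (k / 2)
    return sum(math.exp(-math.dist(s, x) ** 2 / (2 * sigma ** 2))
               for s in samples) / (len(samples) * norm)

def classify(class_samples, priors, x):
    """MAP rule: assign x to the class maximising prior * likelihood."""
    return max(class_samples,
               key=lambda c: priors[c] * parzen_likelihood(class_samples[c], x))

# Toy polarimetric feature vectors for two classes.
classes = {"target": [[0.0, 0.0], [0.2, 0.1]],
           "background": [[1.0, 1.0], [0.9, 1.1]]}
priors = {"target": 0.5, "background": 0.5}
print(classify(classes, priors, [0.1, 0.1]))  # prints "target"
```

A PNN implements exactly this computation in network form: one pattern unit per training sample, one summation unit per class, and an argmax output.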
Katagiri, Fumiaki; Glazebrook, Jane
2003-01-01
A major task in computational analysis of mRNA expression profiles is definition of relationships among profiles on the basis of similarities among them. This is generally achieved by pattern recognition in the distribution of data points representing each profile in a high-dimensional space. Some drawbacks of commonly used pattern recognition algorithms stem from their use of a globally linear space and/or limited degrees of freedom. A pattern recognition method called Local Context Finder (LCF) is described here. LCF uses nonlinear dimensionality reduction for pattern recognition. Then it builds a network of profiles based on the nonlinear dimensionality reduction results. LCF was used to analyze mRNA expression profiles of the plant host Arabidopsis interacting with the bacterial pathogen Pseudomonas syringae. In one case, LCF revealed two dimensions essential to explain the effects of the NahG transgene and the ndr1 mutation on resistant and susceptible responses. In another case, plant mutants deficient in responses to pathogen infection were classified on the basis of LCF analysis of their profiles. The classification by LCF was consistent with the results of biological characterization of the mutants. Thus, LCF is a powerful method for extracting information from expression profile data. PMID:12960373
Facial and prosodic emotion recognition in social anxiety disorder.
Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei
2017-07-01
Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
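The between-task decoding logic above can be sketched with a nearest-centroid classifier standing in for the study's actual machine-learning model: train on one task's load conditions, then predict load for the other task's trials. The activation patterns below are illustrative, not real fMRI data.

```python
def fit_centroids(patterns, loads):
    """Mean activation pattern for each load condition."""
    sums, counts = {}, {}
    for p, y in zip(patterns, loads):
        acc = sums.setdefault(y, [0.0] * len(p))
        sums[y] = [a + x for a, x in zip(acc, p)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [a / counts[y] for a in s] for y, s in sums.items()}

def predict(centroids, p):
    """Assign the load condition whose centroid is closest."""
    return min(centroids,
               key=lambda y: sum((a - x) ** 2
                                 for a, x in zip(centroids[y], p)))

# Train on visual WM trials, test on verbal WM trials (shared load labels).
visual = [([2.0, 0.1], "high"), ([1.8, 0.2], "high"),
          ([0.3, 0.1], "low"), ([0.2, 0.2], "low")]
verbal = [([1.9, 0.3], "high"), ([0.25, 0.1], "low")]
cents = fit_centroids([p for p, _ in visual], [y for _, y in visual])
print([predict(cents, p) for p, _ in verbal])  # ['high', 'low']
```

Above-chance transfer in this direction (and the reverse) is what licenses the inference that the two tasks share a neural load code.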
Employing wavelet-based texture features in ammunition classification
NASA Astrophysics Data System (ADS)
Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.
2017-05-01
Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. For this task, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
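As an illustration of wavelet-derived texture features, the sketch below computes a one-level 2D Haar transform and uses the energy of each sub-band as a four-element descriptor. This is a generic example of the technique, not the authors' exact feature set or wavelet.

```python
def haar_pairs(vec):
    """Averages and differences of adjacent pairs (1D Haar step)."""
    avg = [(vec[i] + vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    diff = [(vec[i] - vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    return avg, diff

def haar2d(img):
    """One-level 2D Haar transform -> four sub-bands (LL, LH, HL, HH)."""
    row_lo, row_hi = zip(*(haar_pairs(r) for r in img))
    def col_transform(block):
        return [haar_pairs(list(col)) for col in zip(*block)]
    lo_cols = col_transform(row_lo)   # low-pass rows, transformed down columns
    hi_cols = col_transform(row_hi)   # high-pass rows, transformed down columns
    ll = [c[0] for c in lo_cols]
    lh = [c[1] for c in lo_cols]
    hl = [c[0] for c in hi_cols]
    hh = [c[1] for c in hi_cols]
    return ll, lh, hl, hh

def energies(img):
    """Sub-band energies: a 4-element texture feature vector."""
    return [sum(x * x for row in band for x in row) for band in haar2d(img)]

flat = [[1.0] * 4 for _ in range(4)]
stripes = [[1.0, 0.0, 1.0, 0.0] for _ in range(4)]
print(energies(flat))     # [4.0, 0.0, 0.0, 0.0] - only low-pass energy
print(energies(stripes))  # [1.0, 0.0, 1.0, 0.0] - stripes add detail energy
```

A classifier then operates on these energy vectors rather than on the raw image, which is the essence of wavelet texture discrimination.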
Kempton, Matthew J; Haldane, Morgan; Jogia, Jigar; Christodoulou, Tessa; Powell, John; Collier, David; Williams, Steven C R; Frangou, Sophia
2009-04-01
The functional catechol-O-methyltransferase (COMT Val108/158Met) polymorphism has been shown to have an impact on tasks of executive function, memory and attention and, recently, tasks with an affective component. As oestrogen reduces COMT activity, we focused on the interaction between gender and COMT genotype on brain activations during an affective processing task. We used functional MRI (fMRI) to record brain activations from 74 healthy subjects who engaged in a facial affect recognition task; subjects viewed and identified fearful compared to neutral faces. There was no main effect of the COMT polymorphism, gender, or genotype-by-gender interaction on task performance. We found a significant effect of gender on brain activations in the left amygdala and right temporal pole, where females demonstrated increased activations over males. Within these regions, Val/Val carriers showed greater signal magnitude compared to Met/Met carriers, particularly in females. The COMT Val108/158Met polymorphism impacts on gender-related patterns of activation in limbic and paralimbic regions, but the functional significance of any oestrogen-related COMT inhibition appears modest.
Artificial neural network detects human uncertainty
NASA Astrophysics Data System (ADS)
Hramov, Alexander E.; Frolov, Nikita S.; Maksimenko, Vladimir A.; Makarov, Vladimir V.; Koronovskii, Alexey A.; Garcia-Prieto, Juan; Antón-Toro, Luis Fernando; Maestú, Fernando; Pisarchik, Alexander N.
2018-03-01
Artificial neural networks (ANNs) are known to be a powerful tool for data analysis. They are used in social science, robotics, and neurophysiology for solving tasks of classification, forecasting, pattern recognition, etc. In neuroscience, ANNs allow the recognition of specific forms of brain activity from multichannel EEG or MEG data. This makes the ANN an efficient computational core for brain-machine systems. However, despite significant achievements of artificial intelligence in recognition and classification of well-reproducible patterns of neural activity, the use of ANNs for recognition and classification of patterns of brain activity still requires additional attention, especially in ambiguous situations. Accordingly, in this research we demonstrate the efficiency of applying an ANN to the classification of human MEG trials corresponding to the perception of bistable visual stimuli with different degrees of ambiguity. We show that, along with classification of brain states associated with multistable image interpretations, in the case of significant ambiguity the ANN can detect an uncertain state in which the observer doubts the image interpretation. Based on the obtained results, we describe the possible application of ANNs for detection of bistable brain activity associated with difficulties in the decision-making process.
Pattern learning with deep neural networks in EMG-based speech recognition.
Wand, Michael; Schultz, Tanja
2014-01-01
We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity.
Facial emotion recognition and borderline personality pathology.
Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio
2017-09-01
The impact of borderline personality pathology on facial emotion recognition has been in dispute; with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.
Ar, Ilktan; Akgul, Yusuf Sinan
2014-11-01
Computerized recognition of home-based physiotherapy exercises has many benefits and it has attracted considerable interest among the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.
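The fusion step described above (combining motion, stance and exercise-object evidence in a generative Bayesian model) can be sketched as a naive-Bayes combination. The exercise names and probability values below are invented for illustration, and the paper's actual network is richer than a conditional-independence model:

```python
def recognize(component_likelihoods, priors):
    """Fuse per-component likelihoods P(evidence_k | exercise) under a
    conditional-independence assumption and normalize with Bayes' rule."""
    scores = {}
    for exercise, prior in priors.items():
        score = prior
        for comp in component_likelihoods:  # motion, stance, exercise object
            score *= comp[exercise]
        scores[exercise] = score
    z = sum(scores.values())
    return {exercise: s / z for exercise, s in scores.items()}

# Toy per-component likelihoods for two candidate exercises:
motion = {"shoulder_abduction": 0.7, "knee_flexion": 0.2}
stance = {"shoulder_abduction": 0.6, "knee_flexion": 0.5}
tool   = {"shoulder_abduction": 0.9, "knee_flexion": 0.3}
priors = {"shoulder_abduction": 0.5, "knee_flexion": 0.5}

posterior = recognize([motion, stance, tool], priors)
```

Combining the three weak component cues at this abstract level is what lets the system avoid explicit body-part segmentation or tracking.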
Does the cost function matter in Bayes decision rule?
Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann
2012-02-01
In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
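The inconsistency analyzed above can be reproduced in a few lines: on a toy posterior over strings, the 0-1 (string-error) Bayes rule and the Bayes rule with Levenshtein cost can pick different hypotheses. The three-string distribution below is fabricated purely to exhibit the effect:

```python
def levenshtein(a, b):
    """Standard edit-distance DP (insert/delete/substitute, unit costs)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def bayes_zero_one(posterior):
    """0-1 cost: minimize string error, i.e. pick the MAP string."""
    return max(posterior, key=posterior.get)

def bayes_edit_cost(posterior):
    """Metric cost: minimize the expected Levenshtein distance instead."""
    def expected_cost(c):
        return sum(p * levenshtein(c, s) for s, p in posterior.items())
    return min(posterior, key=expected_cost)

posterior = {"aab": 0.3, "abb": 0.3, "bbb": 0.4}
# MAP string is "bbb", but "abb" has lower expected edit cost (0.7 vs 0.9).
```

Here the search is restricted to the support of the posterior for brevity; the paper's analysis covers the general case and the conditions under which the two rules provably agree.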
Quantifying facial expression recognition across viewing conditions.
Goren, Deborah; Wilson, Hugh R
2006-04-01
Facial expressions are key to social interactions and to assessment of potential danger in various situations. Therefore, our brains must be able to recognize facial expressions when they are transformed in biologically plausible ways. We used synthetic happy, sad, angry and fearful faces to determine the amount of geometric change required to recognize these emotions during brief presentations. Five-alternative forced choice conditions involving central viewing, peripheral viewing and inversion were used to study recognition among the four emotions. Two-alternative forced choice was used to study affect discrimination when spatial frequency information in the stimulus was modified. The results show an emotion and task-dependent pattern of detection. Facial expressions presented with low peak frequencies are much harder to discriminate from neutral than faces defined by either mid or high peak frequencies. Peripheral presentation of faces also makes recognition much more difficult, except for happy faces. Differences between fearful detection and recognition tasks are probably due to common confusions with sadness when recognizing fear from among other emotions. These findings further support the idea that these emotions are processed separately from each other.
Functional differences among those high and low on a trait measure of psychopathy.
Gordon, Heather L; Baird, Abigail A; End, Alison
2004-10-01
It has been established that individuals who score high on measures of psychopathy demonstrate difficulty when performing tasks requiring the interpretation of others' emotional states. The aim of this study was to elucidate the relation of emotion and cognition to individual differences on a standard psychopathy personality inventory (PPI) among a nonpsychiatric population. Twenty participants completed the PPI. Following survey completion, a mean split of their scores on the emotional-interpersonal factor was performed, and participants were placed into a high or low group. Functional magnetic resonance imaging data were collected while participants performed a recognition task that required attention be given to either the affect or identity of target stimuli. No significant behavioral differences were found. In response to the affect recognition task, significant differences between high- and low-scoring subjects were observed in several subregions of the frontal cortex, as well as the amygdala. No significant differences were found between the groups in response to the identity recognition condition. Results indicate that participants scoring high on the PPI, although not behaviorally distinct, demonstrate a significantly different pattern of neural activity (as measured by blood oxygen level-dependent contrast) in response to tasks that require affective processing. The results suggest a unique neural signature associated with personality differences in a nonpsychiatric population.
Evans, Simon; Dowell, Nicholas G; Tabet, Naji; King, Sarah L; Hutton, Samuel B; Rusted, Jennifer M
2017-02-01
The APOE e4 allele has been linked to poorer cognitive aging and enhanced dementia risk. Previous imaging studies have used subsequent memory paradigms to probe hippocampal function in e4 carriers across the age range, and evidence suggests a pattern of hippocampal overactivation in young adult e4 carriers. In this study, we employed a word-based subsequent memory task under fMRI; pupillometry data were also acquired as an index of cognitive effort. Participants (26 non-e4 carriers and 28 e4 carriers) performed an incidental encoding task (presented as word categorization), followed by a surprise old/new recognition task after a 40 minute delay. In e4 carriers only, subsequently remembered words were linked to increased hippocampal activity. Across all participants, increased pupil diameter differentiated subsequently remembered from forgotten words, and neural activity covaried with pupil diameter in cuneus and precuneus. These effects were weaker in e4 carriers, and e4 carriers did not show greater pupil diameter to remembered words. In the recognition phase, genotype status also modulated hippocampal activity: here, however, e4 carriers failed to show the conventional pattern of greater hippocampal activity to novel words. Overall, neural activity changes were unstable in e4 carriers, failed to respond to novelty, and did not link strongly to cognitive effort, as indexed by pupil diameter. This provides further evidence of abnormal hippocampal recruitment in young adult e4 carriers, manifesting as both up and downregulation of neural activity, in the absence of behavioral performance differences.
fMRI of parents of children with Asperger Syndrome: a pilot study.
Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed
2006-06-01
People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the 'broader autism phenotype.' (1) To test if parents of children with AS show atypical brain activity during a visual search and an empathy task; (2) to test for sex differences during these tasks at the neural level; (3) to test if parents of children with autism are hyper-masculinized, as might be predicted by the 'extreme male brain' theory. We used fMRI during a visual search task (the Embedded Figures Test (EFT)) and an emotion recognition test (the 'Reading the Mind in the Eyes' (or Eyes) test). Twelve parents of children with AS, vs. 12 sex-matched controls. Factorial analysis was used to map main effects of sex, group (parents vs. controls), and sex x group interaction on brain function. An ordinal ANOVA also tested for regions of brain activity where females>males>fathers=mothers, to test for parental hyper-masculinization. RESULTS ON EFT TASK: Female controls showed more activity in extrastriate cortex than male controls, and both mothers and fathers showed even less activity in this area than sex-matched controls. There were no differences in group activation between mothers and fathers of children with AS. The ordinal ANOVA identified two specific regions in visual cortex (right and left, respectively) that showed the pattern Females>Males>Fathers=Mothers, both in BA 19. RESULTS ON EYES TASK: Male controls showed more activity in the left inferior frontal gyrus than female controls, and both mothers and fathers showed even more activity in this area compared to sex-matched controls. Female controls showed greater bilateral inferior frontal activation than males. This was not seen when comparing mothers to males, or mothers to fathers. 
The ordinal ANOVA identified two specific regions that showed the pattern Females>Males>Mothers=Fathers: left medial temporal gyrus (BA 21) and left dorsolateral prefrontal cortex (BA 44). Parents of children with AS show atypical brain function during both visual search and emotion recognition, in the direction of hyper-masculinization of the brain. Because of the small sample size, and lack of age-matching between parents and controls, such results constitute a pilot study that needs replicating with larger samples.
[Learning virtual routes: what does verbal coding do in working memory?].
Gyselinck, Valérie; Grison, Élise; Gras, Doriane
2015-03-01
Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and by a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks impaired landmark knowledge and route knowledge differently. The results can be interpreted as showing that the coding of route knowledge may be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Briggs-Gowan, Margaret J.; Voss, Joel L.; Petitclerc, Amelie; McCarthy, Kimberly; Blair, R. James R.; Wakschlag, Lauren S.
2016-01-01
Introduction: Callous-unemotional (CU) traits in the presence of conduct problems are associated with increased risk of severe antisocial behavior. Developmentally sensitive methods of assessing CU traits have recently been generated, but their construct validity in relation to neurocognitive underpinnings of CU has not been demonstrated. The current study sought to investigate whether the fear-specific emotion recognition deficits associated with CU traits in older individuals are developmentally expressed in young children as low concern for others and punishment insensitivity. Methods: A sub-sample of 337 preschoolers (mean age 4.8 years [SD=.8]) who completed neurocognitive tasks was taken from a larger project of preschool psychopathology. Children completed an emotion recognition task in which they were asked to identify the emotional face from the neutral faces in an array. CU traits were assessed using the Low Concern (LC) and Punishment Insensitivity (PI) subscales of the Multidimensional Assessment Profile of Disruptive Behavior (MAP-DB), which were specifically designed to differentiate the normative misbehavior of early childhood from atypical patterns. Results: High LC, but not PI, scores were associated with a fear-specific deficit in emotion recognition. Girls were more accurate than boys in identifying emotional expressions but no significant interaction between LC or PI and sex was observed. Conclusions: Fear recognition deficits associated with CU traits in older individuals were observed in preschoolers with developmentally-defined patterns of low concern for others. Confirming that the link between CU-related impairments in empathy and distinct neurocognitive deficits is present in very young children suggests that developmentally-specified measurement can detect the substrates of these severe behavioral patterns beginning much earlier than prior work. 
Exploring the development of CU traits and disruptive behavior disorders at very early ages may provide insights critical to early intervention and prevention of severe antisocial behavior. PMID:27167866
Elfman, Kane W; Aly, Mariam; Yonelinas, Andrew P
2014-12-01
Recent evidence suggests that the hippocampus, a region critical for long-term memory, also supports certain forms of high-level visual perception. A seemingly paradoxical finding is that, unlike the thresholded hippocampal signals associated with memory, the hippocampus produces graded, strength-based signals in perception. This article tests a neurocomputational model of the hippocampus, based on the complementary learning systems framework, to determine if the same model can account for both memory and perception, and whether it produces the appropriate thresholded and strength-based signals in these two types of tasks. The simulations showed that the hippocampus, and most prominently the CA1 subfield, produced graded signals when required to discriminate between highly similar stimuli in a perception task, but generated thresholded patterns of activity in recognition memory. A threshold was observed in recognition memory because pattern completion occurred for only some trials and completely failed to occur for others; conversely, in perception, pattern completion always occurred because of the high degree of item similarity. These results offer a neurocomputational account of the distinct hippocampal signals associated with perception and memory, and are broadly consistent with proposals that CA1 functions as a comparator of expected versus perceived events. We conclude that the hippocampal computations required for high-level perceptual discrimination are congruous with current neurocomputational models that account for recognition memory, and fit neatly into a broader description of the role of the hippocampus for the processing of complex relational information. © 2014 Wiley Periodicals, Inc.
Activity recognition from minimal distinguishing subsequence mining
NASA Astrophysics Data System (ADS)
Iqbal, Mohammad; Pao, Hsing-Kuo
2017-08-01
Human activity recognition is one of the most important research topics in the era of the Internet of Things. To separate different activities given sensory data, we utilize a Minimal Distinguishing Subsequence (MDS) mining approach to efficiently find distinguishing patterns among different activities. We first transform the sensory data into a series of sensor triggering events and operate the MDS mining procedure afterwards. The gap constraints are also considered in the MDS mining. Given the multi-class nature of most activity recognition tasks, we modify the MDS mining approach from a binary case to a multi-class one to fit the need for multiple activity recognition. We also study how to select the best parameter set, including the minimal and the maximal support thresholds, in finding the MDSs for effective activity recognition. Overall, the prediction accuracy is 86.59% on the van Kasteren dataset, which consists of four different activities for recognition.
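A toy version of the mining step can clarify the idea: a pattern qualifies as a minimal distinguishing subsequence when its (gap-tolerant) subsequence support meets a minimum threshold in one activity's event sequences, stays below a maximum threshold in the others, and contains no shorter qualifying pattern. The event names, thresholds, and brute-force search below are illustrative only; the paper's miner and its gap constraints are more sophisticated:

```python
from itertools import product

def is_subseq(pat, seq):
    """True if pat occurs in seq as a (gappy) subsequence."""
    it = iter(seq)
    return all(tok in it for tok in pat)

def support(pat, seqs):
    return sum(is_subseq(pat, s) for s in seqs) / len(seqs)

def mine_mds(pos, neg, alphabet, min_sup=0.6, max_sup=0.2, max_len=3):
    """Brute-force search, shortest patterns first, pruning supersequences."""
    found = []
    for length in range(1, max_len + 1):
        for pat in product(alphabet, repeat=length):
            if any(is_subseq(q, pat) for q in found):
                continue  # a shorter distinguishing pattern is contained in it
            if support(pat, pos) >= min_sup and support(pat, neg) <= max_sup:
                found.append(pat)
    return found

# Fabricated sensor-event sequences for two activities:
cooking  = [("door", "kettle", "cup"), ("door", "kettle", "fridge")]
watching = [("door", "tv"), ("couch", "tv", "door")]
alphabet = ["door", "kettle", "cup", "fridge", "tv", "couch"]
mds = mine_mds(cooking, watching, alphabet)
```

Here `("kettle",)` is the sole MDS: it appears in every cooking sequence and no TV-watching sequence, while `door` fires in both activities and so cannot distinguish them.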
Schizophrenia patients demonstrate a dissociation on declarative and non-declarative memory tests.
Perry, W; Light, G A; Davis, H; Braff, D L
2000-12-15
Declarative memory refers to the recall and recognition of factual information. In contrast, non-declarative memory entails a facilitation of memory based on prior exposure and is typically assessed with priming and perceptual-motor sequencing tasks. In this study, schizophrenia patients were compared to normal comparison subjects on two computerized memory tasks: the Word-stem Priming Test (n=30) and the Pattern Sequence Learning Test (n=20). Word-stem Priming includes recall, recognition (declarative) and priming (non-declarative) components of memory. The schizophrenia patients demonstrated an impaired performance on recall of words with relative improvement during the recognition portion of the test. Furthermore, they performed normally on the priming portion of the test. Thus, on tests of declarative memory, the patients had retrieval deficits with intact performance on the non-declarative memory component. The Pattern Sequence Learning Test utilizes a serial reaction time paradigm to assess non-declarative memory. The schizophrenia patients' serial reaction time was significantly slower than that of comparison subjects. However, the patients' rate of acquisition was not different from the normal comparison group. The data suggest that patients with schizophrenia process more slowly than normal, but have an intact non-declarative memory. The schizophrenia patients' dissociation on declarative vs. non-declarative memory tests is discussed in terms of possible underlying structural impairment.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and test set because of differences in light, shade, race and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First of all, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
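The sparse-coding step, representing a signal as a sparse combination of dictionary atoms, can be sketched with greedy matching pursuit. The two-atom dictionary below is a hand-built stand-in for a learned one (the paper's dictionary learning and transfer procedure are not shown), and atoms are assumed to be unit-norm:

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy sparse coding: repeatedly pick the unit-norm dictionary atom
    most correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        best, best_c = None, 0.0
        for name, atom in atoms.items():
            c = sum(ri * ai for ri, ai in zip(residual, atom))
            if abs(c) > abs(best_c):
                best, best_c = name, c
        coeffs[best] = coeffs.get(best, 0.0) + best_c
        residual = [ri - best_c * ai
                    for ri, ai in zip(residual, atoms[best])]
    return coeffs, residual

# Toy dictionary of two orthonormal atoms and a signal in their span:
atoms = {"a": [1.0, 0.0, 0.0], "b": [0.0, 1.0, 0.0]}
coeffs, residual = matching_pursuit([3.0, 2.0, 0.0], atoms)
```

Each iteration moves energy from the residual onto the best-correlated atom; with a learned, transferred dictionary the resulting coefficient vector is the sparse feature handed to the expression classifier.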
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
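The "simple weighted sum of firing rates" linking hypothesis is, computationally, a learned linear readout. A minimal perceptron sketch on fabricated two-neuron "firing rates" illustrates the form (the study's actual readouts were learned over distributed rates from thousands of IT sites, not by this rule):

```python
def train_linear_readout(trials, labels, epochs=50, lr=0.1):
    """Perceptron-style learning of a weighted sum over mean firing rates.
    Labels are +1 / -1 (e.g. 'object A' vs. 'object B')."""
    w, b = [0.0] * len(trials[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(trials, labels):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:  # misclassified: nudge the weights toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """The readout itself: a thresholded weighted sum of the rates."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Fabricated mean rates (spikes/s) for two 'IT neurons' over four trials:
rates  = [[5.0, 1.0], [6.0, 0.5], [1.0, 4.0], [0.5, 5.0]]
labels = [1, 1, -1, -1]
w, b = train_linear_readout(rates, labels)
```

The point of the abstract's claim is that a readout of exactly this linear form, once fitted, sufficed to predict the full human behavioral pattern.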
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Children's Metacognitive Judgments in an Eyewitness Identification Task
ERIC Educational Resources Information Center
Keast, Amber; Brewer, Neil; Wells, Gary L.
2007-01-01
Two experiments examined children's metacognitive monitoring of recognition judgments within an eyewitness identification paradigm. A confidence-accuracy (CA) calibration approach was used to examine patterns of calibration, over-/underconfidence, and resolution. In Experiment 1, children (n=619, mean age=11 years 10 months) and adults (n=600)…
Lexical Processes in the Recognition of Japanese Horizontal and Vertical Compounds
ERIC Educational Resources Information Center
Miwa, Koji; Dijkstra, Ton
2017-01-01
This lexical decision eye-tracking study investigated whether horizontal and vertical readings elicit comparable behavioral patterns and whether reading directions modulate lexical processes. Response times and eye movements were recorded during a lexical decision task with Japanese bimorphemic compound words presented vertically. The data were…
Actively paranoid patients with schizophrenia over attribute anger to neutral faces.
Pinkham, Amy E; Brensinger, Colleen; Kohler, Christian; Gur, Raquel E; Gur, Ruben C
2011-02-01
Previous investigations of the influence of paranoia on facial affect recognition in schizophrenia have been inconclusive, as some studies demonstrate better performance for paranoid relative to non-paranoid patients and others show that paranoid patients display greater impairments. These studies have been limited by small sample sizes and inconsistencies in the criteria used to define groups. Here, we utilized an established emotion recognition task and a large sample to examine differential performance in emotion recognition ability between patients who were actively paranoid (AP) and those who were not actively paranoid (NAP). Accuracy and error patterns on the Penn Emotion Recognition test (ER40) were examined in 132 patients (64 NAP and 68 AP). Groups were defined based on the presence of paranoid ideation at the time of testing rather than diagnostic subtype. AP and NAP patients did not differ in overall task accuracy; however, an emotion by group interaction indicated that AP patients were significantly worse than NAP patients at correctly labeling neutral faces. A comparison of error patterns on neutral stimuli revealed that the groups differed only in misattributions of anger expressions, with AP patients being significantly more likely to misidentify a neutral expression as angry. The present findings suggest that paranoia is associated with a tendency to over-attribute threat to ambiguous stimuli and also lend support to emerging hypotheses of amygdala hyperactivation as a potential neural mechanism for paranoid ideation. Copyright © 2010 Elsevier B.V. All rights reserved.
Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.
Jones, Nicola; Riby, Leigh M; Smith, Michael A
2016-07-01
Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as faces is a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.
Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.
Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M
2017-01-01
Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).
St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.
2012-01-01
There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. 
Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss. PMID:22959616
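The single-session, feature-based approach described above can be sketched as a nearest-centroid classifier. This is an illustrative toy under stated assumptions, not the study's algorithm: the feature names, centroid values, and session data below are invented.

```python
# Hypothetical sketch: classify impairment level from one testing session
# using PVT lapses, KSS rating, time awake, and time of day as features.
# All numbers are invented for illustration.
import math

def classify_impairment(features, centroids):
    """Nearest-centroid classification of one testing session.

    features  -- dict with keys: pvt_lapses, kss, hours_awake, hour_of_day
    centroids -- mapping of class label -> feature dict (training means)
    """
    keys = ("pvt_lapses", "kss", "hours_awake", "hour_of_day")
    best_label, best_dist = None, math.inf
    for label, c in centroids.items():
        dist = math.sqrt(sum((features[k] - c[k]) ** 2 for k in keys))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Illustrative training centroids (made-up values)
CENTROIDS = {
    "low":    {"pvt_lapses": 1,  "kss": 3, "hours_awake": 8,  "hour_of_day": 14},
    "medium": {"pvt_lapses": 6,  "kss": 6, "hours_awake": 20, "hour_of_day": 2},
    "severe": {"pvt_lapses": 15, "kss": 8, "hours_awake": 36, "hour_of_day": 4},
}

session = {"pvt_lapses": 14, "kss": 8, "hours_awake": 34, "hour_of_day": 5}
print(classify_impairment(session, CENTROIDS))  # -> severe
```

A real system would train the centroids (or a richer classifier) on laboratory data and would need to handle the medium/severe boundary, which the abstract notes was predicted less accurately.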
Comparison of Object Recognition Behavior in Human and Monkey
Rajalingham, Rishi; Schmidt, Kailyn
2015-01-01
Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. 
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
Compositional symbol grounding for motor patterns.
Greco, Alberto; Caneva, Claudio
2010-01-01
We developed a new experimental and simulative paradigm to study how compositional grounded representations for motor patterns are established. Participants learned to associate nonsense arm motor patterns, performed in three different hand postures, with nonsense words. There were two group conditions: in the first (compositional), each pattern was associated with a two-word (verb-adverb) sentence; in the second (holistic), each pattern was associated with a unique word. Two experiments were performed. In the first, motor pattern recognition and naming were tested in the two conditions. Results showed that verbal compositionality played no role in recognition and that the main source of confusability in this task was discriminating hand postures. As the naming task proved too difficult, some changes in the learning procedure were implemented in the second experiment. In this experiment, the compositional group achieved better results in naming motor patterns, especially for patterns where hand-posture discrimination was relevant. To disentangle the effects of memory load and of systematic grounding on this result, neural network simulations were also run. After a basic simulation that served as a good model of subjects' performance, subsequent simulations increased the number of stimuli (motor patterns and words) and disrupted the systematic association between words and patterns, while keeping the same number of words and the same syntax. Results showed that in both simulations the advantage for the compositional condition significantly increased. These simulations suggest that the advantage of this condition may be related more to systematicity than to the mere informational gain. All results are discussed in connection with their possible support for the hypothesis of a compositional motor representation and toward a more precise explanation of the factors that make compositional representations work.
Differential theory of learning for efficient neural network pattern recognition
NASA Astrophysics Data System (ADS)
Hampshire, John B., II; Vijaya Kumar, Bhagavatula
1993-09-01
We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.
Vander Lugt correlation of DNA sequence data
NASA Astrophysics Data System (ADS)
Christens-Barry, William A.; Hawk, James F.; Martin, James C.
1990-12-01
DNA, the molecule containing the genetic code of an organism, is a linear chain of subunits. It is the sequence of subunits, of which there are four kinds, that constitutes the unique blueprint of an individual. This sequence is the focus of a large number of analyses performed by an army of geneticists, biologists, and computer scientists. Most of these analyses entail searches for specific subsequences within the larger set of sequence data. Thus, most analyses are essentially pattern recognition or correlation tasks. Yet, there are special features to such analysis that influence the strategy and methods of an optical pattern recognition approach. While the serial processing employed in digital electronic computers remains the main engine of sequence analyses, there is no fundamental reason that more efficient parallel methods cannot be used. We describe an approach using optical pattern recognition (OPR) techniques based on matched spatial filtering. This allows parallel comparison of large blocks of sequence data. In this study we have simulated a Vander Lugt architecture implementing our approach. Searches for specific target sequence strings within a block of DNA sequence from the ColE1 plasmid are performed.
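A digital analogue can clarify what the matched spatial filter computes: a Vander Lugt correlator compares the target against every offset of the data in parallel, whereas this pure-Python sketch (an illustration under assumed representations, not the paper's simulation) slides the target serially.

```python
# Sliding correlation of a target subsequence against a DNA sequence block.
# An optical matched filter produces the whole score array at once;
# here we compute it one offset at a time.

def correlate(sequence, target):
    """Return per-offset counts of positions where target matches sequence."""
    n, m = len(sequence), len(target)
    return [sum(1 for a, b in zip(sequence[i:i + m], target) if a == b)
            for i in range(n - m + 1)]

def find_matches(sequence, target, min_score=None):
    """Offsets whose correlation score reaches min_score (default: exact hits)."""
    if min_score is None:
        min_score = len(target)
    return [i for i, s in enumerate(correlate(sequence, target)) if s >= min_score]

print(find_matches("GATTACAGATTC", "GATT"))  # -> [0, 7]
```

Lowering `min_score` below the target length would also report near-matches, the correlation-peak analogue of partial filter response.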
Inconsistent emotion recognition deficits across stimulus modalities in Huntington's disease.
Rees, Elin M; Farmer, Ruth; Cole, James H; Henley, Susie M D; Sprengelmeyer, Reiner; Frost, Chris; Scahill, Rachael I; Hobbs, Nicola Z; Tabrizi, Sarah J
2014-11-01
Recognition of negative emotions is impaired in Huntington's Disease (HD). It is unclear whether these emotion-specific problems are driven by dissociable cognitive deficits, emotion complexity, test cue difficulty, or visuoperceptual impairments. This study set out to further characterise emotion recognition in HD by comparing patterns of deficits across stimulus modalities; notably including for the first time in HD, the more ecologically and clinically relevant modality of film clips portraying dynamic facial expressions. Fifteen early HD and 17 control participants were tested on emotion recognition from static facial photographs, non-verbal vocal expressions and one-second dynamic film clips, all depicting different emotions. Statistically significant evidence of impairment of anger, disgust and fear recognition was seen in HD participants compared with healthy controls across multiple stimulus modalities. The extent of the impairment, as measured by the difference in the number of errors made between HD participants and controls, differed according to the combination of emotion and modality (p=0.013, interaction test). The largest between-group difference was seen in the recognition of anger from film clips. Consistent with previous reports, anger, disgust and fear were the most poorly recognised emotions by the HD group. This impairment did not appear to be due to task demands or expression complexity as the pattern of between-group differences did not correspond to the pattern of errors made by either group; implicating emotion-specific cognitive processing pathology. There was however evidence that the extent of emotion recognition deficits significantly differed between stimulus modalities. The implications in terms of designing future tests of emotion recognition and care giving are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dance recognition system using lower body movement.
Simpson, Travis T; Wiesner, Susan L; Bennett, Bradford C
2014-02-01
The current means of locating specific movements in film necessitate hours of viewing, making the task of conducting research into movement characteristics and patterns tedious and difficult. This is particularly problematic for the research and analysis of complex movement systems such as sports and dance. While some systems have been developed to manually annotate film, to date no automated way of identifying complex, full body movement exists. With pattern recognition technology and knowledge of joint locations, automatically describing filmed movement using computer software is possible. This study used various forms of lower body kinematic analysis to identify codified dance movements. We created an algorithm that compares an unknown move with a specified start and stop against known dance moves. Our recognition method consists of classification and template correlation using a database of model moves. This system was optimized to include nearly 90 dance and Tai Chi Chuan movements, producing accurate name identification in over 97% of trials. In addition, the program had the capability to provide a kinematic description of either matched or unmatched moves obtained from classification recognition.
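The classification-plus-template-correlation scheme the abstract describes might be sketched as follows. The move names, the joint-angle traces, and the distance measure are all invented for illustration; the actual system used a richer database of nearly 90 moves.

```python
# Toy template matching for move identification: an unknown joint-angle
# trace (with a specified start and stop) is resampled to a common length
# and compared against stored model moves.

def resample(series, n):
    """Linearly resample a series to n points so moves of different
    durations become comparable."""
    if len(series) == 1:
        return [series[0]] * n
    out = []
    for i in range(n):
        pos = i * (len(series) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(series) - 1)
        frac = pos - lo
        out.append(series[lo] * (1 - frac) + series[hi] * frac)
    return out

def best_match(unknown, templates, n=32):
    """Return the template name with the smallest mean squared distance."""
    u = resample(unknown, n)
    def dist(name):
        t = resample(templates[name], n)
        return sum((a - b) ** 2 for a, b in zip(u, t)) / n
    return min(templates, key=dist)

TEMPLATES = {  # invented knee-angle traces, in degrees
    "plie":   [170, 140, 110, 140, 170],
    "releve": [170, 175, 180, 175, 170],
}
print(best_match([168, 145, 112, 138, 172], TEMPLATES))  # -> plie
```

A production system would first classify the move coarsely (e.g., by which joints move) to prune the template set, then correlate only within the surviving class, as the abstract's two-stage "classification and template correlation" design suggests.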
The aftermath of memory retrieval for recycling visual working memory representations.
Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok
2017-07-01
We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)-namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.
The effect of encoding strategy on the neural correlates of memory for faces.
Bernstein, Lori J; Beig, Sania; Siegenthaler, Amy L; Grady, Cheryl L
2002-01-01
Encoding and recognition of unfamiliar faces in young adults were examined using positron emission tomography to determine whether different encoding strategies would lead to encoding/retrieval differences in brain activity. Three types of encoding were compared: a 'deep' task (judging pleasantness/unpleasantness), a 'shallow' task (judging right/left orientation), and an intentional learning task in which subjects were instructed to learn the faces for a subsequent memory test but were not provided with a specific strategy. Memory for all faces was tested with an old/new recognition test. A modest behavioral effect was obtained, with deeply-encoded faces being recognized more accurately than shallowly-encoded or intentionally-learned faces. Regardless of encoding strategy, encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. Within encoding, the type of strategy produced different brain activity patterns, with deep encoding being characterized by left amygdala and left anterior cingulate activation. There was no effect of encoding strategy on brain activity during the recognition conditions. Posterior fusiform gyrus activation was related to better recognition accuracy in those conditions encouraging perceptual strategies, whereas activity in left frontal and temporal areas correlated with better performance during the 'deep' condition. Results highlight three important aspects of face memory: (1) the effect of encoding strategy was seen only at encoding and not at recognition; (2) left inferior prefrontal cortex was engaged during encoding of faces regardless of strategy; and (3) differential activity in fusiform gyrus was found, suggesting that activity in this area is not only a result of automatic face processing but is modulated by controlled processes.
Lima, César F; Garrett, Carolina; Castro, São Luís
2013-01-01
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected musical and prosodic emotions differently. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
Multitasking During Degraded Speech Recognition in School-Age Children
Grieco-Calub, Tina M.; Ward, Kristina M.; Brehm, Laurel
2017-01-01
Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890
Recognition of Schematic Facial Displays of Emotion in Parents of Children with Autism
ERIC Educational Resources Information Center
Palermo, Mark T.; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria
2006-01-01
Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions without the concurrent presentation of a verbal category was investigated in 40 parents of children with autism and 40 matched controls. "Autism fathers" performed worse than "autism mothers," who performed worse than controls in…
Functional Magnetic Resonance Imaging of Cognitive Processing in Young Adults with Down Syndrome
ERIC Educational Resources Information Center
Jacola, Lisa M.; Byars, Anna W.; Chalfonte-Evans, Melinda; Schmithorst, Vincent J.; Hickey, Fran; Patterson, Bonnie; Hotze, Stephanie; Vannest, Jennifer; Chiu, Chung-Yiu; Holland, Scott K.; Schapiro, Mark B.
2011-01-01
The authors used functional magnetic resonance imaging (fMRI) to investigate neural activation during a semantic-classification/object-recognition task in 13 persons with Down syndrome and 12 typically developing control participants (age range = 12-26 years). A comparison between groups suggested atypical patterns of brain activation for the…
Working Memory Impairment in People with Williams Syndrome: Effects of Delay, Task and Stimuli
ERIC Educational Resources Information Center
O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara
2009-01-01
Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern in WS extends to working memory (WM). The present studies…
Fast Morphological Effects in First and Second Language Word Recognition
ERIC Educational Resources Information Center
Diependaele, Kevin; Dunabeitia, Jon Andoni; Morris, Joanna; Keuleers, Emmanuel
2011-01-01
In three experiments we compared the performance of native English speakers to that of Spanish-English and Dutch-English bilinguals on a masked morphological priming lexical decision task. The results do not show significant differences across the three experiments. In line with recent meta-analyses, we observed a graded pattern of facilitation…
A Nanotechnology-Ready Computing Scheme based on a Weakly Coupled Oscillator Network
NASA Astrophysics Data System (ADS)
Vodenicarevic, Damir; Locatelli, Nicolas; Abreu Araujo, Flavio; Grollier, Julie; Querlioz, Damien
2017-03-01
With conventional transistor technologies reaching their limits, alternative computing schemes based on novel technologies are currently gaining considerable interest. Notably, promising computing approaches have proposed to leverage the complex dynamics emerging in networks of coupled oscillators based on nanotechnologies. The physical implementation of such architectures remains a true challenge, however, as most proposed ideas are not robust to nanotechnology devices’ non-idealities. In this work, we propose and investigate the implementation of an oscillator-based architecture, which can be used to carry out pattern recognition tasks, and which is tailored to the specificities of nanotechnologies. This scheme relies on a weak coupling between oscillators, and does not require a fine tuning of the coupling values. After evaluating its reliability under the severe constraints associated to nanotechnologies, we explore the scalability of such an architecture, suggesting its potential to realize pattern recognition tasks using limited resources. We show that it is robust to issues like noise, variability and oscillator non-linearity. Defining network optimization design rules, we show that nano-oscillator networks could be used for efficient cognitive processing.
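The weak-coupling idea can be illustrated with a toy Kuramoto-style phase model: oscillators whose natural frequencies encode an input lock more tightly to oscillators encoding a similar stored pattern. This sketch is not the architecture the authors propose; the coupling form and all parameters are assumptions.

```python
# Toy illustration of pattern comparison via weakly coupled phase oscillators.
# Each input feature sets the natural frequency of one oscillator; each
# template feature sets the frequency of a partner oscillator. Weak coupling
# pulls each pair toward phase locking, and the residual phase coherence
# measures how similar input and template are.
import math

def synchrony(input_freqs, template_freqs, k=0.5, dt=0.01, steps=2000):
    """Euler-integrate pairwise-coupled phase oscillators; return the mean
    phase coherence across pairs (1.0 = fully locked)."""
    n = len(input_freqs)
    phi_a = [0.0] * n  # input-driven oscillators
    phi_b = [0.0] * n  # template-driven oscillators
    coherence = 0.0
    for _ in range(steps):
        for i in range(n):
            d = phi_b[i] - phi_a[i]
            phi_a[i] += dt * (input_freqs[i] + k * math.sin(d))
            phi_b[i] += dt * (template_freqs[i] - k * math.sin(d))
        coherence = sum(math.cos(phi_b[i] - phi_a[i]) for i in range(n)) / n
    return coherence

close = synchrony([1.0, 2.0, 3.0], [1.05, 2.05, 3.05])  # similar pattern
far   = synchrony([1.0, 2.0, 3.0], [1.80, 2.90, 3.90])  # dissimilar pattern
print(close > far)  # the closer pattern locks more tightly -> True
```

Classification would pick the template with the highest coherence; because the decision rests on locking rather than exact coupling values, the scheme tolerates device variability, which is the robustness property the paper emphasizes.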
Creating a meaningful visual perception in blind volunteers by optic nerve stimulation
NASA Astrophysics Data System (ADS)
Brelén, M. E.; Duret, F.; Gérard, B.; Delbeke, J.; Veraart, C.
2005-03-01
A blind volunteer, suffering from retinitis pigmentosa, has been chronically implanted with an optic nerve visual prosthesis. Vision rehabilitation with this volunteer has concentrated on the development of a stimulation strategy according to which video camera images are converted into stimulation pulses. The aim is to convey as much information as possible about the visual scene within the limits of the device's capabilities. Pattern recognition tasks were used to assess the effectiveness of the stimulation strategy. The results demonstrate how even a relatively basic algorithm can efficiently convey useful information regarding the visual scene. By increasing the number of phosphenes used in the algorithm, better performance is observed but a longer training period is required. After a learning period, the volunteer achieved a pattern recognition score of 85% at 54 s on average per pattern. After nine evaluation sessions, when using a stimulation strategy exploiting all available phosphenes, no saturation effect has yet been observed.
Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification
Pham, Tuan D.
2014-01-01
The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744
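The geostatistical ingredient of the approach above can be illustrated with an empirical semivariogram, here over a 1-D pixel profile standing in for image data. The function and the data are illustrative assumptions, not the paper's full fuzzy-probabilistic model.

```python
# Empirical semivariogram: quantifies how the variability between pixel
# values grows with their spatial separation (lag h).

def semivariogram(values, max_lag):
    """gamma(h) = mean of 0.5*(z[i+h] - z[i])**2 over all pairs at lag h."""
    gammas = {}
    for h in range(1, max_lag + 1):
        pairs = [(values[i], values[i + h]) for i in range(len(values) - h)]
        gammas[h] = sum(0.5 * (b - a) ** 2 for a, b in pairs) / len(pairs)
    return gammas

# For a smooth ramp, variability grows steadily with lag.
print(semivariogram([0, 1, 2, 3, 4, 5, 6, 7], 3))  # -> {1: 0.5, 2: 2.0, 3: 4.5}
```

In the paper's setting, statistics of this kind (extended to 2-D and combined with probability measures of fuzzy events) serve as texture features on which a classifier is trained.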
Handwritten recognition of Tamil vowels using deep learning
NASA Astrophysics Data System (ADS)
Ram Prashanth, N.; Siddarth, B.; Ganesh, Anirudh; Naveen Kumar, Vaegae
2017-11-01
We come across a large volume of handwritten text in our daily lives, and handwritten character recognition has long been an important area of research in pattern recognition. The complexity of the task varies among languages, largely because of language-specific properties such as the similarity between characters, their distinct shapes, and the number of characters. There have been numerous works on character recognition of English alphabets, with laudable success, but regional languages have not been dealt with as frequently or with similar accuracies. In this paper, we explored the performance of Deep Belief Networks in the classification of handwritten Tamil vowels and compared the results obtained. The proposed method shows satisfactory recognition accuracy in light of the difficulties faced with regional languages, such as the similarity between characters and the minute nuances that differentiate them. This work can be further extended to all Tamil characters.
Vibrotactile pattern recognition: a portable compact tactile matrix.
Thullier, Francine; Bolmont, Benoît; Lestienne, Francis G
2012-02-01
Compact tactile matrix (CTM) is a vibrotactile device composed of a seven-by-seven array of electromechanical vibrators "tactip" used to represent tactile patterns applied to a small skin area. The CTM uses a dynamic feature to generate spatiotemporal tactile patterns. The design requirements focus particularly on maximizing the transmission of the vibration from one tactip to the others as well as to the skin over a square area of 16 cm² while simultaneously minimizing the transmission of vibrations throughout the overall structure of the CTM. Experiments were conducted on 22 unpracticed subjects to evaluate how the CTM could be used to develop a tactile semantics for communication of instructions in order to test the ability of the subjects to identify: 1) directional prescriptors for gesture guidance and 2) instructional commands for operational task requirements in a military context. The results indicate that, after familiarization, recognition accuracies in the tactile patterns were remarkably precise for more than 80% of the subjects. © 2011 IEEE
Kauppi, Jukka-Pekka; Martikainen, Kalle; Ruotsalainen, Ulla
2010-12-01
The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to the emerging complexity of radar waveforms. Especially multifunction radars (MFRs), capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms, are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically, which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. Copyright © 2010 Elsevier Ltd. All rights reserved.
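The sliding-window detection of PRI subpatterns described in this abstract can be illustrated with a minimal sketch. Everything here is invented for illustration (the template names, PRI values, and tolerance); the paper's actual hierarchical classifier is far more elaborate:

```python
def match_template(window, template, tol):
    """True if each PRI in the window is within tol of the template value."""
    return len(window) == len(template) and all(
        abs(a - b) <= tol for a, b in zip(window, template))

def detect_pri_modes(pri_seq, templates, tol=2.0):
    """Slide a window over a pulse train's PRI sequence and label each position
    with any stored modulation template it matches (a toy stand-in for the
    paper's hierarchical classification)."""
    hits = []
    for name, tmpl in templates.items():
        w = len(tmpl)
        for i in range(len(pri_seq) - w + 1):
            if match_template(pri_seq[i:i + w], tmpl, tol):
                hits.append((i, name))
    return sorted(hits)

# Hypothetical stored PRI modulation templates (microseconds).
templates = {"constant": [100, 100, 100], "stagger": [100, 150, 100]}
seq = [100, 100, 100, 150, 100, 300]
print(detect_pri_modes(seq, templates))  # [(0, 'constant'), (2, 'stagger')]
```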
The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words
Hoedemaker, Renske S.; Gordon, Peter C.
2016-01-01
In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
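The ex-Gaussian fits mentioned above model a response-time distribution as the sum of a Gaussian (mu, sigma) and an exponential (tau) component. A minimal method-of-moments sketch of the idea (not the authors' fitting procedure; the parameter values and sample size below are arbitrary): the mean is mu + tau, the variance is sigma² + tau², and the third central moment is 2·tau³, which gives closed-form estimators.

```python
import random, math

def sample_exgaussian(mu, sigma, tau, n, seed=0):
    """Draw n samples from an ex-Gaussian: Normal(mu, sigma) + Exponential(tau)."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

def fit_exgaussian_moments(xs):
    """Method-of-moments estimates (mu, sigma, tau) from ex-Gaussian data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance = sigma^2 + tau^2
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment = 2*tau^3
    tau = (m3 / 2.0) ** (1.0 / 3.0) if m3 > 0 else 0.0
    sigma2 = max(m2 - tau ** 2, 0.0)
    return mean - tau, math.sqrt(sigma2), tau

data = sample_exgaussian(mu=300.0, sigma=80.0, tau=150.0, n=100000)
# Estimates should land near the true (300, 80, 150) for a sample this large.
mu_hat, sigma_hat, tau_hat = fit_exgaussian_moments(data)
```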
The onset and time course of semantic priming during rapid recognition of visual words.
Hoedemaker, Renske S; Gordon, Peter C
2017-05-01
In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Assessment of Self-Recognition in Young Children with Handicaps.
ERIC Educational Resources Information Center
Kelley, Michael F.; And Others
1988-01-01
Thirty young children with handicaps were assessed on five self-recognition mirror tasks. The set of tasks formed a reproducible scale, indicating that these tasks are an appropriate measure of self-recognition in this population. Data analysis suggested that stage of self-recognition is positively and significantly related to cognitive…
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
Task-specific reorganization of the auditory cortex in deaf humans.
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-24
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.
Cultural differences in self-recognition: the early development of autonomous and related selves?
Ross, Josephine; Yilmaz, Mandy; Dale, Rachel; Cassidy, Rose; Yildirim, Iraz; Suzanne Zeedyk, M
2017-05-01
Fifteen- to 18-month-old infants from three nationalities were observed interacting with their mothers and during two self-recognition tasks. Scottish interactions were characterized by distal contact, Zambian interactions by proximal contact, and Turkish interactions by a mixture of contact strategies. These culturally distinct experiences may scaffold different perspectives on self. In support, Scottish infants performed best in a task requiring recognition of the self in an individualistic context (mirror self-recognition), whereas Zambian infants performed best in a task requiring recognition of the self in a less individualistic context (body-as-obstacle task). Turkish infants performed similarly to Zambian infants on the body-as-obstacle task, but outperformed Zambians on the mirror self-recognition task. Verbal contact (a distal strategy) was positively related to mirror self-recognition and negatively related to passing the body-as-obstacle task. Directive action and speech (proximal strategies) were negatively related to mirror self-recognition. Self-awareness performance was best predicted by cultural context; autonomous settings predicted success in mirror self-recognition, and related settings predicted success in the body-as-obstacle task. These novel data substantiate the idea that cultural factors may play a role in the early expression of self-awareness. More broadly, the results highlight the importance of moving beyond the mark test, and designing culturally sensitive tests of self-awareness. © 2016 John Wiley & Sons Ltd.
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.
Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M
2013-02-01
We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Cultural Differences in Gaze and Emotion Recognition: Americans Contrast More than Chinese
Tehan Stanley, Jennifer; Zhang, Xin; Fung, Helene H.; Isaacowitz, Derek M.
2014-01-01
We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye tracking data suggest that, for some emotions, Americans attended more to the target faces and made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PMID:22889414
NASA Astrophysics Data System (ADS)
Iqtait, M.; Mohamad, F. S.; Mamat, M.
2018-03-01
Biometrics refers to pattern recognition systems used for the automatic recognition of persons based on an individual's characteristics and features. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust location of trait points is a complicated and difficult issue in face recognition. Cootes proposed the Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and texture of a specified object simultaneously. In this paper we describe the two algorithms in more detail and report experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate trait-point location than AAM, but AAM achieves a better match to the texture.
Tc1 mouse model of trisomy-21 dissociates properties of short- and long-term recognition memory.
Hall, Jessica H; Wiseman, Frances K; Fisher, Elizabeth M C; Tybulewicz, Victor L J; Harwood, John L; Good, Mark A
2016-04-01
The present study examined memory function in Tc1 mice, a transchromosomic model of Down syndrome (DS). Tc1 mice demonstrated an unusual delay-dependent deficit in recognition memory. More specifically, Tc1 mice showed intact immediate (30-s), impaired short-term (10-min) and intact long-term (24-h) memory for objects. A similar pattern was observed for olfactory stimuli, confirming the generality of the pattern across sensory modalities. The specificity of the behavioural deficits in Tc1 mice was confirmed using APP overexpressing mice that showed the opposite pattern of object memory deficits. In contrast to object memory, Tc1 mice showed no deficit in either immediate or long-term memory for object-in-place information. Similarly, Tc1 mice showed no deficit in short-term memory for object-location information. The latter result indicates that Tc1 mice were able to detect and react to spatial novelty at the same delay interval that was sensitive to an object novelty recognition impairment. These results demonstrate that (1) novelty detection per se and (2) the encoding of visuo-spatial information were not disrupted in adult Tc1 mice. The authors conclude that the task-specific nature of the short-term recognition memory deficit suggests that the trisomy of genes on human chromosome 21 in Tc1 mice impacts on (perirhinal) cortical systems supporting short-term object and olfactory recognition memory. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
fMRI of Parents of Children with Asperger Syndrome: A Pilot Study
ERIC Educational Resources Information Center
Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed
2006-01-01
Background: People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the "broader autism phenotype." Aims: (1) To test if parents of children with AS show atypical brain activity during a visual search…
Morey, Candice Coker; Cowan, Nelson; Morey, Richard D; Rouder, Jeffery N
2011-02-01
Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual array comparison task and a tone sequence comparison task. In the critical conditions, an increase in reward for one task corresponded to a decrease in reward for the concurrent task, but memory load remained constant. Our results show patterns of interference consistent with a trade-off between the tasks, suggesting that a shared resource can be flexibly divided, rather than only fully allotted to either of the tasks. Our findings support a role for a domain-general resource in models of working memory, and furthermore suggest that this resource is flexibly divisible.
Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.
Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S
2016-03-01
To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms the state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
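For reference, the local binary patterns (LBP) that DCP is benchmarked against can be sketched in a few lines. This is the basic 3×3 LBP (not the paper's DCP descriptor), assuming an image given as a list of rows of gray values:

```python
def lbp_image(img):
    """Basic 3x3 local binary pattern: for each interior pixel, compare the 8
    neighbours (clockwise from top-left) to the centre and pack the results
    into an 8-bit code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes[y - 1][x - 1] = code
    return codes

# A flat patch yields the all-ones code: every neighbour >= centre.
flat = [[5] * 3 for _ in range(3)]
print(lbp_image(flat))  # [[255]]
```

In practice a histogram of these codes over image cells serves as the texture feature; DCP's "doubled cost" refers to computing two such cross patterns per pixel.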
Is talking to an automated teller machine natural and fun?
Chan, F Y; Khalid, H M
Usability and affective issues of using automatic speech recognition technology to interact with an automated teller machine (ATM) are investigated in two experiments. The first uncovered dialogue patterns of ATM users for the purpose of designing the user interface for a simulated speech ATM system. Applying the Wizard-of-Oz methodology, multiple mapping and word-spotting techniques, the speech-driven ATM accommodates bilingual users of Bahasa Melayu and English. The second experiment evaluates the usability of a hybrid speech ATM, comparing it with a simulated manual ATM. The aim is to investigate how natural and fun talking to a speech ATM can be for first-time users. Subjects performed withdrawal and balance enquiry tasks. An ANOVA was performed on the usability and affective data. The results showed significant differences between systems in the ability to complete the tasks as well as in transaction errors. Performance was measured by the time taken by subjects to complete the task and the number of speech recognition errors that occurred. On the basis of user emotions, it can be said that the hybrid speech system enabled pleasurable interaction. Despite the limitations of speech recognition technology, users are set to talk to the ATM when it becomes available for public use.
How color enhances visual memory for natural scenes.
Spence, Ian; Wong, Patrick; Rusan, Maria; Rastegar, Naghmeh
2006-01-01
We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.
Martin, Chris B; Mirsattari, Seyed M; Pruessner, Jens C; Pietrantonio, Sandra; Burneo, Jorge G; Hayman-Abello, Brent; Köhler, Stefan
2012-11-01
In déjà vu, a phenomenological impression of familiarity for the current visual environment is experienced with a sense that it should in fact not feel familiar. The fleeting nature of this phenomenon in daily life, and the difficulty in developing experimental paradigms to elicit it, has hindered progress in understanding déjà vu. Some neurological patients with temporal-lobe epilepsy (TLE) consistently experience déjà vu at the onset of their seizures. An investigation of such patients offers a unique opportunity to shed light on its possible underlying mechanisms. In the present study, we sought to determine whether unilateral TLE patients with déjà vu (TLE+) show a unique pattern of interictal memory deficits that selectively affect familiarity assessment. In Experiment 1, we employed a Remember-Know paradigm for categorized visual scenes and found evidence for impairments that were limited to familiarity-based responses. In Experiment 2, we administered an exclusion task for highly similar categorized visual scenes that placed both recognition processes in opposition. TLE+ patients again displayed recognition impairments, and these impairments spared their ability to engage recollective processes so as to counteract familiarity. The selective deficits we observed in TLE+ patients contrasted with the broader pattern of recognition-memory impairments that was present in a control group of unilateral patients without déjà vu (TLE-). MRI volumetry revealed that ipsilateral medial temporal structures were less broadly affected in TLE+ than in TLE- patients, with a trend for more focal volume reductions in the rhinal cortices of the TLE+ group. The current findings establish a first empirical link between déjà vu in TLE and processes of familiarity assessment, as defined and measured in current cognitive models. 
They also reveal a pattern of selectivity in recognition impairments that is rarely observed and, thus, of significant theoretical interest to the memory literature at large. Copyright © 2012 Elsevier Ltd. All rights reserved.
Common constraints limit Korean and English character recognition in peripheral vision.
He, Yingchen; Kwon, MiYoung; Legge, Gordon E
2018-01-01
The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition.
Common constraints limit Korean and English character recognition in peripheral vision
He, Yingchen; Kwon, MiYoung; Legge, Gordon E.
2018-01-01
The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition. PMID:29327041
Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding
Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping
2015-01-01
Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so implementing feature extraction and dimensionality reduction while improving recognition performance is a crucial task. In this paper a novel machinery fault diagnosis approach is proposed, based on a statistical locally linear embedding (S-LLE) algorithm, which extends LLE by exploiting fault class label information. The fault diagnosis approach first extracts the intrinsic manifold features from high-dimensional feature vectors, which are obtained from vibration signals by time-domain, frequency-domain and empirical mode decomposition (EMD) feature extraction, and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
Evaluation of Anomaly Detection Method Based on Pattern Recognition
NASA Astrophysics Data System (ADS)
Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke
The number of threats on the Internet is rapidly increasing, and anomaly detection has become of increasing importance. High-speed backbone traffic is particularly exposed, but its analysis is a complicated task due to the amount of data, the lack of payload data, asymmetric routing and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.
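The picture-based idea can be sketched as binning packets into a time × address grid and flagging outlying cells. The bin sizes, z-score rule, and data below are invented stand-ins for the paper's actual pattern-recognition step:

```python
from collections import Counter
import statistics

def traffic_heatmap(packets, time_bin, addr_bin):
    """Bin (timestamp, address) packet records into a 2-D grid, as in
    picture-based traffic analysis. Returns a Counter of cell -> packet count."""
    grid = Counter()
    for t, addr in packets:
        grid[(int(t // time_bin), addr // addr_bin)] += 1
    return grid

def flag_anomalies(grid, z_thresh=3.0):
    """Flag cells whose count deviates strongly from the overall cell
    statistics (a toy stand-in for the paper's pattern recognition)."""
    counts = list(grid.values())
    mean = statistics.fmean(counts)
    sd = statistics.pstdev(counts) or 1.0
    return [cell for cell, n in grid.items() if (n - mean) / sd > z_thresh]

# Uniform background traffic plus one hypothetical burst toward address 7.
packets = [(i, i) for i in range(100)] + [(50, 7)] * 50
hot = flag_anomalies(traffic_heatmap(packets, time_bin=1, addr_bin=1))
print(hot)  # [(50, 7)]
```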
Pattern recognition with "materials that compute".
Fang, Yan; Yashin, Victor V; Levitan, Steven P; Balazs, Anna C
2016-09-01
Driven by advances in materials and computer science, researchers are attempting to design systems where the computer and material are one and the same entity. Using theoretical and computational modeling, we design a hybrid material system that can autonomously transduce chemical, mechanical, and electrical energy to perform a computational task in a self-organized manner, without the need for external electrical power sources. Each unit in this system integrates a self-oscillating gel, which undergoes the Belousov-Zhabotinsky (BZ) reaction, with an overlaying piezoelectric (PZ) cantilever. The chemomechanical oscillations of the BZ gels deflect the PZ layer, which consequently generates a voltage across the material. When these BZ-PZ units are connected in series by electrical wires, the oscillations of these units become synchronized across the network, where the mode of synchronization depends on the polarity of the PZ. We show that the network of coupled, synchronizing BZ-PZ oscillators can perform pattern recognition. The "stored" patterns are sets of polarities of the individual BZ-PZ units, and the "input" patterns are coded through the initial phase of the oscillations imposed on these units. The results of the modeling show that the input pattern closest to the stored pattern exhibits the fastest convergence time to stable synchronization behavior. In this way, networks of coupled BZ-PZ oscillators achieve pattern recognition. Further, we show that the convergence time to stable synchronization provides a robust measure of the degree of match between the input and stored patterns. Through these studies, we establish experimentally realizable design rules for creating "materials that compute."
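The claim that convergence time to synchronization measures the degree of pattern match can be illustrated with a much-simplified model: identical mean-field Kuramoto phase oscillators rather than BZ-PZ units, with the stored pattern taken to be the in-phase state. All parameters here are invented for illustration:

```python
import math

def converge_time(phases, coupling=1.0, dt=0.01, tol=0.05, max_steps=20000):
    """Evolve identical Kuramoto oscillators under mean-field coupling and
    return the number of steps until the phase spread (1 - order parameter)
    drops below tol."""
    th = list(phases)
    n = len(th)
    for step in range(max_steps):
        rx = sum(math.cos(t) for t in th) / n
        ry = sum(math.sin(t) for t in th) / n
        r = math.hypot(rx, ry)          # order parameter: 1 = fully in phase
        if 1.0 - r < tol:
            return step
        psi = math.atan2(ry, rx)        # mean phase
        th = [t + dt * coupling * r * math.sin(psi - t) for t in th]
    return max_steps

close = [0.0, 0.5, -0.5, 0.3]   # input close to the stored (in-phase) pattern
far = [0.0, 2.0, -2.0, 2.5]     # input far from it
t_close, t_far = converge_time(close), converge_time(far)
print(t_close < t_far)  # True: the closer input synchronizes faster
```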
Pattern recognition with “materials that compute”
Fang, Yan; Yashin, Victor V.; Levitan, Steven P.; Balazs, Anna C.
2016-01-01
Driven by advances in materials and computer science, researchers are attempting to design systems where the computer and material are one and the same entity. Using theoretical and computational modeling, we design a hybrid material system that can autonomously transduce chemical, mechanical, and electrical energy to perform a computational task in a self-organized manner, without the need for external electrical power sources. Each unit in this system integrates a self-oscillating gel, which undergoes the Belousov-Zhabotinsky (BZ) reaction, with an overlaying piezoelectric (PZ) cantilever. The chemomechanical oscillations of the BZ gels deflect the PZ layer, which consequently generates a voltage across the material. When these BZ-PZ units are connected in series by electrical wires, the oscillations of these units become synchronized across the network, where the mode of synchronization depends on the polarity of the PZ. We show that the network of coupled, synchronizing BZ-PZ oscillators can perform pattern recognition. The “stored” patterns are set of polarities of the individual BZ-PZ units, and the “input” patterns are coded through the initial phase of the oscillations imposed on these units. The results of the modeling show that the input pattern closest to the stored pattern exhibits the fastest convergence time to stable synchronization behavior. In this way, networks of coupled BZ-PZ oscillators achieve pattern recognition. Further, we show that the convergence time to stable synchronization provides a robust measure of the degree of match between the input and stored patterns. Through these studies, we establish experimentally realizable design rules for creating “materials that compute.” PMID:27617290
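The recognition principle described here (pattern stored as coupling polarities, input coded in initial phases, match read out from convergence time) can be illustrated with a toy network of phase oscillators. This is not the BZ-PZ gel model itself: the Kuramoto-style signed coupling, the function name `time_to_sync`, and all constants are assumptions made for the sketch.

```python
import numpy as np

def time_to_sync(polarity, init_phase, K=0.5, dt=0.05, tol=1e-2, steps=4000):
    """Toy stand-in for a BZ-PZ network: identical phase oscillators whose
    coupling signs are given by the stored polarity pattern (+1/-1 per unit).
    Returns the time until the polarity-weighted order parameter
    r = |mean(polarity * exp(i*theta))| exceeds 1 - tol."""
    theta = np.array(init_phase, dtype=float)
    p = np.asarray(polarity, dtype=float)
    for step in range(steps):
        z = np.mean(p * np.exp(1j * theta))  # polarity-weighted mean field
        theta += dt * K * p * np.abs(z) * np.sin(np.angle(z) - theta)
        if 1.0 - np.abs(np.mean(p * np.exp(1j * theta))) < tol:
            return step * dt
    return np.inf

stored = np.array([1, 1, -1, -1, 1, -1, 1, -1])
rng = np.random.default_rng(1)
# Input matching the stored pattern: phase 0 on +1 units, pi on -1 units
matched = np.where(stored > 0, 0.0, np.pi) + 0.05 * rng.normal(size=8)
random_input = rng.uniform(0, 2 * np.pi, 8)
t_match = time_to_sync(stored, matched)       # synchronizes almost immediately
t_other = time_to_sync(stored, random_input)  # takes longer to synchronize
```

As in the paper's modeling results, the input closest to the stored pattern reaches stable synchronization fastest, so convergence time serves as a match score.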
Holcomb, H H; Medoff, D R; Caudill, P J; Zhao, Z; Lahti, A C; Dannals, R F; Tamminga, C A
1998-09-01
Tone recognition is partially subserved by neural activity in the right frontal and primary auditory cortices. First, we determined the brain areas associated with tone perception and recognition. This study then examined how regional cerebral blood flow (rCBF) in these and other brain regions correlates with the behavioral characteristics of a difficult tone recognition task. rCBF changes were assessed using H2(15)O positron emission tomography. Subtraction procedures were used to localize significant change regions, and correlational analyses were applied to determine how response times (RT) predicted rCBF patterns. Twelve trained normal volunteers were studied in three conditions: REST, sensory motor control (SMC) and decision (DEC). The SMC-REST contrast revealed bilateral activation of primary auditory cortices, cerebellum and bilateral inferior frontal gyri. DEC-SMC produced significant clusters in the right middle and inferior frontal gyri, insula and claustrum; the anterior cingulate gyrus and supplementary motor area; the left insula/claustrum; and the left cerebellum. Correlational analyses, RT versus rCBF from DEC scans, showed a positive correlation in right inferior and middle frontal cortex; rCBF in bilateral auditory cortices and cerebellum exhibited significant negative correlations with RT. These changes suggest that neural activity in the right frontal, superior temporal and cerebellar regions shifts back and forth in magnitude depending on whether tone recognition RT is relatively fast or slow, during a difficult, accurate assessment.
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes in major depression. However, the literature is unclear on a number of important factors, including whether or not these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
Palmer, Daniel; Creighton, Samantha; Prado, Vania F; Prado, Marco A M; Choleris, Elena; Winters, Boyer D
2016-09-15
Substantial evidence implicates acetylcholine (ACh) in the acquisition of object memories. While most research has focused on the role of the cholinergic basal forebrain and its cortical targets, there are additional cholinergic networks that may contribute to object recognition. The striatum contains an independent cholinergic network composed of interneurons. In the current study, we investigated the role of this cholinergic signalling in object recognition using mice deficient for the Vesicular Acetylcholine Transporter (VAChT) within interneurons of the striatum. We tested whether these striatal VAChT(D2-Cre-flox/flox) mice would display normal short-term (5 or 15min retention delay) and long-term (3h retention delay) object recognition memory. In a home cage object recognition task, male and female VAChT(D2-Cre-flox/flox) mice were impaired selectively with a 15min retention delay. When tested on an object location task, VAChT(D2-Cre-flox/flox) mice displayed intact spatial memory. Finally, when object recognition was tested in a Y-shaped apparatus, designed to minimize the influence of spatial and contextual cues, only females displayed impaired recognition with a 5min retention delay, but when males were challenged with a 15min retention delay, they were also impaired; neither males nor females were impaired with the 3h delay. The pattern of results suggests that striatal cholinergic transmission plays a role in the short-term memory for object features, but not spatial location. Copyright © 2016 Elsevier B.V. All rights reserved.
Impairment in face processing in autism spectrum disorder: a developmental perspective.
Greimel, Ellen; Schulte-Rüther, Martin; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin
2014-09-01
Findings on face identity and facial emotion recognition in autism spectrum disorder (ASD) are inconclusive. Moreover, little is known about the developmental trajectory of face processing skills in ASD. Taking a developmental perspective, the aim of this study was to extend previous findings on face processing skills in a sample of adolescents and adults with ASD. N = 38 adolescents and adults (13-49 years) with high-functioning ASD and n = 37 typically developing (TD) control subjects matched for age and IQ participated in the study. Moreover, n = 18 TD children between the ages of 8 and 12 were included to address the question whether face processing skills in ASD follow a delayed developmental pattern. Face processing skills were assessed using computerized tasks of face identity recognition (FR) and identification of facial emotions (IFE). ASD subjects showed impaired performance on several parameters of the FR and IFE task compared to TD control adolescents and adults. Whereas TD adolescents and adults outperformed TD children in both tasks, performance in ASD adolescents and adults was similar to the group of TD children. Within the groups of ASD and control adolescents and adults, no age-related changes in performance were found. Our findings corroborate and extend previous studies showing that ASD is characterised by broad impairments in the ability to process faces. These impairments seem to reflect a developmentally delayed pattern that remains stable throughout adolescence and adulthood.
The relationship between speech recognition, behavioural listening effort, and subjective ratings.
Picou, Erin M; Ricketts, Todd A
2018-06-01
The purpose of this study was to evaluate the reliability and validity of four subjective questions related to listening effort. A secondary purpose of this study was to evaluate the effects of hearing aid beamforming microphone arrays on word recognition and listening effort. Participants answered subjective questions immediately following testing in a dual-task paradigm with three microphone settings in a moderately reverberant laboratory environment in two noise configurations. Participants rated their: (1) mental work, (2) desire to improve the situation, (3) tiredness, and (4) desire to give up. Data were analysed using repeated measures and reliability analyses. Eighteen adults with symmetrical sensorineural hearing loss participated. Beamforming differentially affected word recognition and listening effort. Analysis revealed the same pattern of results for behavioural listening effort and subjective ratings of desire to improve the situation. Conversely, ratings of work revealed the same pattern of results as word recognition performance. Ratings of tiredness and desire to give up were unaffected by hearing aid microphone or noise configuration. Participant ratings of their desire to control the listening situation appear to be reliable subjective indicators of listening effort that align with results from a behavioural measure of listening effort.
Nasertdinova, A. D.; Bochkarev, V. V.
2017-11-01
Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method of solving the problem of overfitting is proposed in this article. This method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing libraries of neural computing. The algorithm was tested on the problem of recognition of handwritten symbols from the MNIST database, as well as on the task of predicting time series (series of the average monthly number of sunspots and series generated by the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for the recognition of handwritten figures from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.
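The abstract does not spell out the reduction procedure, but its core idea, replacing a layer's full weight matrix by a small number of principal directions, can be sketched with a truncated SVD (the numerical workhorse behind PCA). The matrix sizes, the rank `k`, and the function name below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def svd_compress(W, k):
    """Approximate weight matrix W (m x n) by its top-k singular directions,
    storing k*(m + n) numbers instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    Wk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k reconstruction
    n_params = k * (W.shape[0] + W.shape[1])  # parameters in the two factors
    return Wk, n_params

# A weight matrix with low effective rank compresses with almost no error
rng = np.random.default_rng(0)
W = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 50))  # rank-5, 100x50
Wk, n_params = svd_compress(W, 5)  # 750 parameters instead of 5000
```

In a trained network the spectrum is rarely exactly low-rank, so the reduction factor trades off against a small increase in error, which is the trade-off the article quantifies on MNIST.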
Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom.
Lewis, Dawna E; Valente, Daniel L; Spalding, Jody L
2015-01-01
While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students, reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each, presented audio-only by a single talker either from the loudspeaker at 0 degrees azimuth or randomly from the five loudspeakers. Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL.
The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.
Batterink, Laura; Neville, Helen
2011-11-01
The vast majority of word meanings are learned simply by extracting them from context rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M-). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M- words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M- words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time window compared with M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, whereas implicit representations may require more extensive exposure or more time to emerge.
Video content analysis of surgical procedures.
Loukas, Constantinos
2018-02-01
In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the broader field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed literature was obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.
Gasperini, Filippo; Brizzolara, Daniela; Cristofani, Paola; Casalini, Claudia; Chilosi, Anna Maria
2014-01-01
Children with Developmental Dyslexia (DD) are impaired in Rapid Automatized Naming (RAN) tasks, where subjects are asked to name arrays of high frequency items as quickly as possible. However, the reasons why RAN speed discriminates DD from typical readers are not yet fully understood. Our study aimed to identify some of the cognitive mechanisms underlying the RAN-reading relationship by comparing a group of 32 children with DD with an age-matched control group of typical readers on a naming task and a visual recognition task, both using a discrete-trial methodology, in addition to a serial RAN task, all using the same stimuli (digits and colors). Results showed a significant slowness of DD children in both serial and discrete-trial naming (DN) tasks regardless of type of stimulus, but no difference between the two groups on the discrete-trial recognition task. Significant differences between DD and control participants in the RAN task disappeared when performance in the DN task was partialled out by covariance analysis for colors, but not for digits. The same pattern held in a subgroup of DD subjects with a history of early language delay (LD). By contrast, in a subsample of DD children without LD, the RAN deficit was specific for digits and disappeared after slowness in DN was partialled out. Slowness in DN was more evident for LD than for noLD DD children. Overall, our results confirm previous evidence indicating a name-retrieval deficit as a cognitive impairment underlying RAN slowness in DD children. This deficit seems to be more marked in DD children with previous LD. Moreover, additional cognitive deficits specifically associated with serial RAN tasks have to be taken into account when explaining the deficient RAN speed of these latter children. We suggest that partially different cognitive dysfunctions underpin superficially similar RAN impairments in different subgroups of DD subjects. PMID:25237301
Social Experience Does Not Abolish Cultural Diversity in Eye Movements
Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto
2011-01-01
Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626
Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.
Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong
2016-01-01
This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high or low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms for negative symptoms.
Electrophysiological distinctions between recognition memory with and without awareness
Ko, Philip C.; Duda, Bryant; Hussey, Erin P.; Ally, Brandon A.
2013-01-01
The influence of implicit memory representations on explicit recognition may help to explain cases of accurate recognition decisions made with high uncertainty. During a recognition task, implicit memory may enhance the fluency of a test item, biasing decision processes to endorse it as “old”. This model may help explain recognition-without-identification, a remarkable phenomenon in which participants make highly accurate recognition decisions despite the inability to identify the test item. The current study investigated whether recognition-without-identification for pictures elicits a similar pattern of neural activity as other types of accurate recognition decisions made with uncertainty. Further, this study also examined whether recognition-without-identification for pictures could be attained by the use of perceptual and conceptual information from memory. To accomplish this, participants studied pictures and then performed a recognition task under difficult viewing conditions while event-related potentials (ERPs) were recorded. Behavioral results showed that recognition was highly accurate even when test items could not be identified, demonstrating recognition-without-identification. The behavioral performance also indicated that recognition-without-identification was mediated by both perceptual and conceptual information, independently of one another. The ERP results showed dramatically different memory related activity during the early 300 to 500 ms epoch for identified items that were studied compared to unidentified items that were studied. Similar to previous work highlighting accurate recognition without retrieval awareness, test items that were not identified, but correctly endorsed as “old,” elicited a negative posterior old/new effect (i.e., N300). In contrast, test items that were identified and correctly endorsed as “old,” elicited the classic positive frontal old/new effect (i.e., FN400).
Importantly, both of these effects were elicited under conditions when participants used perceptual information to make recognition decisions. Conceptual information elicited very different ERPs than perceptual information, showing that the informational wealth of pictures can evoke multiple routes to recognition even without awareness of memory retrieval. These results are discussed within the context of current theories regarding the N300 and the FN400. PMID:23287567
Emotion recognition in Parkinson's disease: Static and dynamic factors.
Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C
2018-02-01
The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli compared to a task using only static (photograph) stimuli and compared performances on both tasks to healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks. Groups did not significantly differ in their performances on the static and dynamic tasks; however, the trend was suggestive that PD participants performed worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues, similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Electrophysiologically dissociating episodic preretrieval processing.
Bridger, Emma K; Mecklinger, Axel
2012-06-01
Contrasts between ERPs elicited by new items from tests with distinct episodic retrieval requirements index preretrieval processing. Preretrieval operations are thought to facilitate the recovery of task-relevant information because they have been shown to correlate with response accuracy in tasks in which prioritizing the retrieval of this information could be a useful strategy. This claim was tested here by contrasting new item ERPs from two retrieval tasks, each designed to explicitly require the recovery of a different kind of mnemonic information. New item ERPs differed from 400 msec poststimulus, but the distribution of these effects varied markedly, depending upon participants' response accuracy: A protracted posteriorly located effect was present for higher performing participants, whereas an anteriorly distributed effect occurred for lower performing participants. The magnitude of the posterior effect from 400 to 800 msec correlated with response accuracy, supporting the claim that preretrieval processes facilitate the recovery of task-relevant information. Additional contrasts between ERPs from these tasks and an old/new recognition task operating as a relative baseline revealed task-specific effects with nonoverlapping scalp topographies, in line with the assumption that these new item ERP effects reflect qualitatively distinct retrieval operations. Similarities in these effects were also used to reason about preretrieval processes related to the general requirement to recover contextual details. These insights, alongside the distinct pattern of effects for the two accuracy groups, reveal the multifarious nature of preretrieval processing while indicating that only some of these classes of operation are systematically related to response accuracy in recognition memory tasks.
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination and emotional task, respectively, during the visualization of emotional valent faces. PMID:25463459
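The two ingredients named in this abstract, a Gaussian graphical model for each subject's connectivity and an L1-regularized linear SVM for sparse discrimination, can be sketched with scikit-learn. This is a minimal illustration under assumed settings (simulated data, the `alpha` and `C` values, the helper name `connectivity_features`), not the authors' optimized framework.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.svm import LinearSVC

def connectivity_features(ts, alpha=0.05):
    """Fit a sparse inverse covariance (Gaussian graphical model) to one
    subject's time-by-region series and vectorize its off-diagonal entries."""
    prec = GraphicalLasso(alpha=alpha).fit(ts).precision_
    iu = np.triu_indices_from(prec, k=1)
    return prec[iu]

# Simulated subjects: group 0 shares a common signal (connectivity),
# group 1 has independent regions
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(10):
        ts = rng.normal(size=(150, 5))
        if label == 0:
            ts += rng.normal(size=(150, 1))  # shared factor couples regions
        X.append(connectivity_features(ts))
        y.append(label)
X, y = np.array(X), np.array(y)

# The L1 penalty keeps only a sparse subset of connections in the classifier
clf = LinearSVC(penalty='l1', dual=False, C=1.0, max_iter=5000).fit(X, y)
```

The sparse SVM weights then point to the specific connections driving the group difference, which is the interpretability gain the abstract argues for over voxel-based features.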
Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A
2015-02-01
In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
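The band-pass manipulation described above can be sketched with a radial filter in the Fourier domain. The image size and band edges below are illustrative, and "cycles per face" is taken to equal cycles per image, as is common when the face fills the frame; the study's exact filtering parameters are not given in the abstract.

```python
import numpy as np

def bandpass(img, low_cpf, high_cpf):
    """Keep spatial frequencies between low_cpf and high_cpf cycles/face,
    assuming the face spans the (square) image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.arange(h)[:, None] - h // 2  # vertical frequency, cycles/image
    fx = np.arange(w)[None, :] - w // 2  # horizontal frequency, cycles/image
    r = np.hypot(fy, fx)                 # radial frequency
    mask = (r >= low_cpf) & (r <= high_cpf)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A grating at 8 cycles/image survives a 5-20 cpf band but not a 21-40 cpf band
x = np.arange(64)
grating = np.tile(np.cos(2 * np.pi * 8 * x / 64), (64, 1))
mid_band = bandpass(grating, 5, 20)    # essentially identical to the grating
high_band = bandpass(grating, 21, 40)  # essentially zero
```

The middle band kept here (roughly 5-20 cpf) corresponds to the range the authors found to elicit the same fixation patterns as unfiltered faces.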
Family environment influences emotion recognition following paediatric traumatic brain injury.
Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S
2010-01-01
This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest family functioning variables, especially financial resources, can influence performance on an emotional processing task following TBI in children.
Puzzle test: A tool for non-analytical clinical reasoning assessment.
Monajemi, Alireza; Yaghmaei, Minoo
2016-01-01
Most contemporary clinical reasoning tests typically assess non-automatic thinking. Therefore, a test is needed to measure automatic reasoning or pattern recognition, which has been largely neglected in clinical reasoning tests. The Puzzle Test (PT) is dedicated to assessing automatic clinical reasoning in routine situations. This test was first introduced in 2009 by Monajemi et al in the Olympiad for Medical Sciences Students. PT is an item format that has gained acceptance in medical education, but no detailed guidelines exist for this test's format, construction and scoring. In this article, a format is described and the steps to prepare and administer valid and reliable PTs are presented. PT examines a specific clinical reasoning task: pattern recognition. PT does not replace other clinical reasoning assessment tools; rather, it complements them within a comprehensive clinical reasoning assessment strategy.
Comparing the Frequency Effect Between the Lexical Decision and Naming Tasks in Chinese
Wu, Jei-Tun
2016-01-01
In psycholinguistic research, the frequency effect can be one of the indicators of eligible experimental tasks that examine the nature of lexical access. Usually, only one of those tasks is chosen to examine lexical access in a study. Using two exemplar experiments, this paper introduces an approach that includes both the lexical decision task (LDT) and the naming task in a single study. In the first experiment, the stimuli were Chinese characters with frequency and regularity manipulated. In the second experiment, the stimuli were switched to Chinese two-character words, in which the word frequency and the regularity of the leading character were manipulated. The logic of these two exemplar experiments was to explore important issues such as the role of phonology in recognition by comparing the frequency effect between the two tasks. The results revealed different patterns of lexical access from those reported in alphabetic systems. Experiment 1 showed a larger frequency effect in the naming task than in the LDT when the stimuli were Chinese characters; notably, when the stimuli were regular Chinese characters, the frequency effect in the naming task was roughly equivalent to that in the LDT. However, a smaller frequency effect was found in the naming task than in the LDT when the stimuli were switched to Chinese two-character words in Experiment 2. Taking advantage of the respective demands and characteristics of both tasks, researchers can obtain a more complete and precise picture of character/word recognition. PMID:27077703
Autonomous planning and scheduling on the TechSat 21 mission
NASA Technical Reports Server (NTRS)
Sherwood, R.; Chien, S.; Castano, R.; Rabideau, G.
2002-01-01
The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting.
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter in extracting LBP features from images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features combined with the adaptive CS-LBP features were found to yield high FER rates. Evaluation of the adaptive texture features shows performance competitive with the nonadaptive features and higher than other state-of-the-art approaches.
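The CS-LBP descriptor itself is compact enough to sketch: for each pixel, the four center-symmetric neighbor pairs of its 3x3 neighborhood are compared, giving a 4-bit code (0-15) whose histogram serves as a texture feature. The implementation below is a hypothetical minimal version (fixed 3x3 neighborhood, no adaptive sizing as in the paper).

```python
# Center-symmetric LBP (CS-LBP) on a 3x3 neighborhood: compares the four
# center-symmetric pixel pairs instead of each neighbor against the center.
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Return a 4-bit CS-LBP code (0..15) for each interior pixel."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]  # used only for shape; CS-LBP ignores the center value
    # the four center-symmetric pairs of the 8-neighborhood
    pairs = [
        (img[:-2, :-2], img[2:, 2:]),    # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]), # N  vs S
        (img[:-2, 2:], img[2:, :-2]),    # NE vs SW
        (img[1:-1, 2:], img[1:-1, :-2]), # E  vs W
    ]
    code = np.zeros_like(c, dtype=int)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(int) << bit
    return code

img = np.arange(25).reshape(5, 5)          # toy 5x5 intensity ramp
codes = cs_lbp(img)
hist = np.bincount(codes.ravel(), minlength=16)  # 16-bin texture histogram
```

Because only four comparisons are made per pixel, the CS-LBP histogram has 16 bins rather than the 256 of standard 8-neighbor LBP, which is one reason it is attractive as a second-stage feature.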
Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field
Luo, Gang; Min, Wanli
2007-01-01
Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using the electroencephalogram (EEG) signal, based on a recently developed statistical pattern recognition method, the conditional random field (CRF), and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject s_new with limited training data D_new, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from D_new a regulated estimate of the CRF's parameters. Using sleep recordings from human subjects, we show that even without any D_new, our sleep stager can achieve an average classification accuracy of 70% on s_new. This accuracy increases with the size of D_new and eventually becomes close to the theoretical limit. PMID:18693884
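The sequential side of such a stager can be sketched as linear-chain decoding: Viterbi search over per-epoch emission scores plus stage-transition scores. This is a toy illustration of the decoding step only; the stage set, scores, and probabilities below are invented, not the paper's learned CRF potentials.

```python
# Toy Viterbi decoding over a linear chain of sleep epochs.
# Transition and emission scores are made up; a real CRF learns them from EEG.
import numpy as np

stages = ["Wake", "REM", "NREM"]
# trans[i, j]: log-score for moving from stage i to stage j (favors staying put)
trans = np.log(np.array([[0.8, 0.1, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.1, 0.2, 0.7]]))
# emit[t, j]: log-score of stage j at epoch t (would come from EEG features)
emit = np.log(np.array([[0.7, 0.2, 0.1],
                        [0.6, 0.3, 0.1],
                        [0.1, 0.2, 0.7],
                        [0.1, 0.1, 0.8]]))

T, S = emit.shape
score = np.full((T, S), -np.inf)
back = np.zeros((T, S), dtype=int)
score[0] = emit[0]
for t in range(1, T):
    # cand[i, j]: best score ending in j at t, having been in i at t-1
    cand = score[t - 1][:, None] + trans + emit[t]
    back[t] = cand.argmax(axis=0)
    score[t] = cand.max(axis=0)

# backtrack the highest-scoring stage sequence
path = [int(score[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
path.reverse()
print([stages[s] for s in path])
```

The transition term is what distinguishes this from classifying each epoch independently: it lets evidence from neighboring epochs smooth out momentary ambiguity, which is the appeal of CRF-style staging.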
Global Neural Pattern Similarity as a Common Basis for Categorization and Recognition Memory
Xue, Gui; Love, Bradley C.; Preston, Alison R.; Poldrack, Russell A.
2014-01-01
Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. PMID:24872552
An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.
Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V
2018-04-01
Despite significant advances in computational algorithms and the development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition, as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from the intrinsic advantages of biologically inspired event-driven systems and the massively parallel, energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity to develop low-cost tactile modules for large-area applications by integrating sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for the categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window; this, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency.
These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
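The ELM principle at the heart of the chip described above is simple enough to sketch in software: a fixed random hidden layer followed by a closed-form least-squares readout. The sketch below uses synthetic two-class data as a stand-in; the sizes, random projection, and "texture" features are illustrative assumptions, not the hardware system's spike encoding.

```python
# Minimal extreme learning machine (ELM) sketch: random, untrained hidden
# layer + closed-form least-squares readout. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=64, n_classes=2):
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    H = np.tanh(X @ W)                           # hidden-layer activations
    Y = np.eye(n_classes)[y]                     # one-hot targets
    beta = np.linalg.pinv(H) @ Y                 # readout solved in closed form
    return W, beta

def elm_predict(X, W, beta):
    return (np.tanh(X @ W) @ beta).argmax(axis=1)

# two synthetic "texture" classes separated by their mean feature level
X = np.vstack([rng.normal(0.0, 1.0, (50, 10)), rng.normal(2.0, 1.0, (50, 10))])
y = np.repeat([0, 1], 50)
W, beta = elm_train(X, y)
acc = (elm_predict(X, W, beta) == y).mean()
```

Because only the readout `beta` is solved (no iterative backpropagation), training reduces to one pseudoinverse, which is what makes the approach attractive for low-power hardware implementations like the chip above.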
Introducing memory and association mechanism into a biologically inspired visual model.
Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng
2014-09-01
A famous biologically inspired hierarchical model (the HMAX model), which was proposed recently and corresponds to areas V1 to V4 of the ventral pathway in the primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model achieves a degree of position- and scale-tolerant recognition, which is a central problem in pattern recognition. In this paper, based on additional biological experimental evidence, we introduce a memory and association mechanism into the HMAX model. The main contributions of the work are: 1) mimicking the active memory and association mechanism and adding top-down adjustment to the HMAX model, which is the first attempt to add active adjustment to this well-known model, and 2) from the perspective of information, algorithms based on the new model can reduce computation and storage while maintaining good recognition performance. The new model is also applied to object recognition processes. Preliminary experimental results show that our method is efficient, with a much lower memory requirement.
NASA Astrophysics Data System (ADS)
Jordanić, Mislav; Rojas-Martínez, Mónica; Mañanas, Miguel Angel; Francesc Alonso, Joan
2016-08-01
Objective. The development of modern assistive and rehabilitation devices requires reliable and easy-to-use methods to extract neural information for control of devices. Group-specific pattern recognition identifiers are influenced by inter-subject variability. Based on high-density EMG (HD-EMG) maps, our research group has already shown that inter-subject muscle activation patterns exist in a population of healthy subjects. The aim of this paper is to analyze muscle activation patterns associated with four tasks (flexion/extension of the elbow, and supination/pronation of the forearm) at three different effort levels in a group of patients with incomplete Spinal Cord Injury (iSCI). Approach. Muscle activation patterns were evaluated by the automatic identification of these four isometric tasks along with the identification of levels of voluntary contractions. Two types of classifiers were considered in the identification: linear discriminant analysis and support vector machine. Main results. Results show that performance of classification increases when combining features extracted from intensity and spatial information of HD-EMG maps (accuracy = 97.5%). Moreover, when compared to a population with injuries at different levels, a lower variability between activation maps was obtained within a group of patients with similar injury suggesting stronger task-specific and effort-level-specific co-activation patterns, which enable better prediction results. Significance. Despite the challenge of identifying both the four tasks and the three effort levels in patients with iSCI, promising results were obtained which support the use of HD-EMG features for providing useful information regarding motion and force intention.
Neural networks: Alternatives to conventional techniques for automatic docking
NASA Technical Reports Server (NTRS)
Vinz, Bradley L.
1994-01-01
Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.
Magnetic Tunnel Junction Mimics Stochastic Cortical Spiking Neurons
NASA Astrophysics Data System (ADS)
Sengupta, Abhronil; Panda, Priyadarshini; Wijesinghe, Parami; Kim, Yusung; Roy, Kaushik
2016-07-01
Brain-inspired computing architectures attempt to mimic the computations performed by the neurons and synapses of the human brain in order to achieve its efficiency in learning and cognitive tasks. In this work, we demonstrate the mapping of the probabilistic spiking nature of pyramidal neurons in the cortex to the stochastic switching behavior of a magnetic tunnel junction (MTJ) in the presence of thermal noise. We present results to illustrate the efficiency of neuromorphic systems based on such probabilistic neurons for pattern recognition tasks in the presence of lateral inhibition and homeostasis. Such stochastic MTJ neurons can also potentially provide a direct mapping to the probabilistic computing elements in belief networks for performing regenerative tasks.
Global ensemble texture representations are critical to rapid scene perception.
Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A
2017-06-01
Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties.
Ingles, Janet L; Fisk, John D; Fleetwood, Ian; Burrell, Steven; Darvesh, Sultan
2014-03-01
Clinical analyses of patients with acquired dysgraphia provide unique opportunities to understand the cognitive and neural organization of written language production. We report J.B., a 50-year-old woman with peripheral dysgraphia who had prominent dissociations in her ability to write in lowercase versus uppercase and print versus cursive. We gave J.B. a series of tasks that evaluated her skills at writing uppercase and lowercase print and cursive, spelling aloud and in writing, writing numbers and symbols, and visual letter recognition and imagery. She was impaired in printing letters, with lowercase more affected than uppercase, but her cursive writing was relatively intact. This pattern was consistent across letter, word, and nonword writing tasks. She was unimpaired on tasks assessing her visual recognition and imagery of lowercase and uppercase letters. Her writing of numbers was preserved. J.B.'s handwriting disorder was accompanied by a central phonological dysgraphia. Our findings indicate functional independence of graphomotor programs for print and cursive letter styles and for letters and numbers. We discuss the relationship between peripheral and central writing disorders.
Yan, Xiaoqian; Andrews, Timothy J; Jenkins, Rob; Young, Andrew W
2016-01-01
Perceptual advantages for own-race compared to other-race faces have been demonstrated for the recognition of facial identity and expression. However, these effects have not been investigated in the same study with measures that can determine the extent of cross-cultural agreement as well as differences. To address this issue, we used a photo sorting task in which Chinese and Caucasian participants were asked to sort photographs of Chinese or Caucasian faces by identity or by expression. This paradigm matched the task demands of identity and expression recognition and avoided constrained forced-choice or verbal labelling requirements. Other-race effects of comparable magnitude were found across the identity and expression tasks. Caucasian participants made more confusion errors for the identities and expressions of Chinese than Caucasian faces, while Chinese participants made more confusion errors for the identities and expressions of Caucasian than Chinese faces. However, analyses of the patterns of responses across groups of participants revealed a considerable amount of underlying cross-cultural agreement. These findings suggest that widely repeated claims that members of other cultures "all look the same" overstate the cultural differences.
Advanced methods in NDE using machine learning approaches
NASA Astrophysics Data System (ADS)
Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank
2018-04-01
Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new or leverage existing algorithms that learn from training data and give accurate predictions, or find patterns, particularly in new and unseen but similar data, fits Non-Destructive Evaluation (NDE) perfectly. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic, or optical methods. Fraunhofer IKTS is using machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of quality assessment tasks. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system to create model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis (PCA) are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. Afterwards, sensor signals from unknown samples can be recognized and classified automatically by the previously trained algorithms. Recently, the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): algorithms are still trained on an ordinary PC, but the trained algorithms run on the digital signal processor and the FPGA chip. The same approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized, exact fixation of components. The automated testing can then be done by the machine.
By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
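The unsupervised simplification step mentioned above (PCA on acoustic features before classification) can be sketched with plain numpy via the SVD. The feature matrix below is synthetic, with most variance concentrated in two latent directions; real input would be acoustic signal features from the primary analysis step.

```python
# Sketch of PCA-based data simplification: project acoustic feature vectors
# onto their top principal components. Features here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
# 200 signals x 12 acoustic features, dominated by 2 latent directions
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 12))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 12))

Xc = X - X.mean(axis=0)                  # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance ratio per component
X_reduced = Xc @ Vt[:2].T                # keep the top two components
```

Reducing 12 correlated features to 2 components in this way shrinks the input to the downstream supervised classifiers, which matters when the trained models must fit on a DSP or FPGA as described above.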
Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?
Haro, Juan; Ferré, Pilar
2018-06-01
It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words with multiple meanings relative to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous word processing and representation.
Memory monitoring by animals and humans
NASA Technical Reports Server (NTRS)
Smith, J. D.; Shields, W. E.; Allendoerfer, K. R.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)
1998-01-01
The authors asked whether animals and humans would use similarly an uncertain response to escape indeterminate memories. Monkeys and humans performed serial probe recognition tasks that produced differential memory difficulty across serial positions (e.g., primacy and recency effects). Participants were given an escape option that let them avoid any trials they wished and receive a hint to the trial's answer. Across species, across tasks, and even across conspecifics with sharper or duller memories, monkeys and humans used the escape option selectively when more indeterminate memory traces were probed. Their pattern of escaping always mirrored the pattern of their primary memory performance across serial positions. Signal-detection analyses confirm the similarity of the animals' and humans' performances. Optimality analyses assess their efficiency. Several aspects of monkeys' performance suggest the cognitive sophistication of their decisions to escape.
Buratto, Luciano G.; Pottage, Claire L.; Brown, Charity; Morrison, Catriona M.; Schaefer, Alexandre
2014-01-01
Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated. PMID:25330251
Object Recognition Memory and the Rodent Hippocampus
ERIC Educational Resources Information Center
Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.
2010-01-01
In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…
A Four-Component Model of Age-Related Memory Change
Healey, M. Karl; Kahana, Michael J.
2015-01-01
We develop a novel, computationally explicit theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four-component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233
How similar are recognition memory and inductive reasoning?
Hayes, Brett K; Heit, Evan
2013-07-01
Conventionally, memory and reasoning are seen as different types of cognitive activities driven by different processes. In two experiments, we challenged this view by examining the relationship between recognition memory and inductive reasoning involving multiple forms of similarity. A common study set (members of a conjunctive category) was followed by a test set containing old and new category members, as well as items that matched the study set on only one dimension. The study and test sets were presented under recognition or induction instructions. In Experiments 1 and 2, the inductive property being generalized was varied in order to direct attention to different dimensions of similarity. When there was no time pressure on decisions, patterns of positive responding were strongly affected by property type, indicating that different types of similarity were driving recognition and induction. By comparison, speeded judgments showed weaker property effects and could be explained by generalization based on overall similarity. An exemplar model, GEN-EX (GENeralization from EXamples), could account for both the induction and recognition data. These findings show that induction and recognition share core component processes, even when the tasks involve flexible forms of similarity.
Songbirds use spectral shape, not pitch, for sound pattern recognition
Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.
2016-01-01
Humans easily recognize “transposed” musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition. PMID:26811447
Family environment influences emotion recognition following paediatric traumatic brain injury
SCHMIDT, ADAM T.; ORSTEN, KIMBERLEY D.; HANTEN, GERRI R.; LI, XIAOQI; LEVIN, HARVEY S.
2011-01-01
Objective This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions Findings suggest family functioning variables—especially financial resources—can influence performance on an emotional processing task following TBI in children. PMID:21058900
Mier, Daniela; Eisenacher, Sarah; Rausch, Franziska; Englisch, Susanne; Gerchen, Martin Fungisai; Zamoscik, Vera; Meyer-Lindenberg, Andreas; Zink, Mathias; Kirsch, Peter
2017-10-01
Schizophrenia is associated with significant impairments in social cognition. These impairments have been shown to go along with altered activation of the posterior superior temporal sulcus (pSTS). However, studies that investigate connectivity of pSTS during social cognition in schizophrenia are sparse. Twenty-two patients with schizophrenia and 22 matched healthy controls completed a social-cognitive task for functional magnetic resonance imaging that allows the investigation of affective Theory of Mind (ToM), emotion recognition and the processing of neutral facial expressions. Moreover, a resting-state measurement was taken. Patients with schizophrenia performed worse in the social-cognitive task (main effect of group). In addition, a group by social-cognitive processing interaction was revealed for activity, as well as for connectivity during the social-cognitive task, i.e., patients with schizophrenia showed hyperactivity of right pSTS during neutral face processing, but hypoactivity during emotion recognition and affective ToM. In addition, hypoconnectivity between right and left pSTS was revealed for affective ToM, but not for neutral face processing or emotion recognition. No group differences in connectivity from right to left pSTS occurred during resting state. This pattern of aberrant activity and connectivity of the right pSTS during social cognition might form the basis of false-positive perceptions of emotions and intentions and could contribute to the emergence and sustainment of delusions.
Kinnavane, L; Amin, E; Horne, M; Aggleton, J P
2014-01-01
The present study examined immediate-early gene expression in the perirhinal cortex of rats with hippocampal lesions. The goal was to test those models of recognition memory which assume that the perirhinal cortex can function independently of the hippocampus. The c-fos gene was targeted, as its expression in the perirhinal cortex is strongly associated with recognition memory. Four groups of rats were examined. Rats with hippocampal lesions and their surgical controls were given either a recognition memory task (novel vs. familiar objects) or a relative recency task (objects with differing degrees of familiarity). Perirhinal Fos expression in the hippocampal-lesioned groups correlated with both recognition and recency performance. The hippocampal lesions, however, had no apparent effect on overall levels of perirhinal or entorhinal cortex c-fos expression in response to novel objects, with only restricted effects being seen in the recency condition. Network analyses showed that whereas the patterns of parahippocampal interactions were differentially affected by novel or familiar objects, these correlated networks were not altered by hippocampal lesions. Additional analyses in control rats revealed two modes of correlated medial temporal activation. Novel stimuli recruited the pathway from the lateral entorhinal cortex (cortical layer II or III) to hippocampal field CA3, and thence to CA1. Familiar stimuli recruited the direct pathway from the lateral entorhinal cortex (principally layer III) to CA1. The present findings not only reveal the independence from the hippocampus of some perirhinal systems associated with recognition memory, but also show how novel stimuli engage hippocampal subfields in qualitatively different ways from familiar stimuli. PMID:25264133
Basic and complex emotion recognition in children with autism: cross-cultural findings.
Fridenson-Hayo, Shimrit; Berggren, Steve; Lassalle, Amandine; Tal, Shahar; Pigat, Delia; Bölte, Sven; Baron-Cohen, Simon; Golan, Ofer
2016-01-01
Children with autism spectrum conditions (ASC) have emotion recognition deficits when tested in different expression modalities (face, voice, body). However, these findings usually focus on basic emotions, using one or two expression modalities. In addition, cultural similarities and differences in emotion recognition patterns in children with ASC have not been explored before. The current study examined the similarities and differences in the recognition of basic and complex emotions by children with ASC and typically developing (TD) controls across three cultures: Israel, Britain, and Sweden. Fifty-five children with high-functioning ASC, aged 5-9, were compared to 58 TD children. On each site, groups were matched on age, sex, and IQ. Children were tested using four tasks, examining recognition of basic and complex emotions from voice recordings, videos of facial and bodily expressions, and emotional video scenarios including all modalities in context. Compared to their TD peers, children with ASC showed emotion recognition deficits in both basic and complex emotions on all three modalities and their integration in context. Complex emotions were harder to recognize, compared to basic emotions for the entire sample. Cross-cultural agreement was found for all major findings, with minor deviations on the face and body tasks. Our findings highlight the multimodal nature of ER deficits in ASC, which exist for basic as well as complex emotions and are relatively stable cross-culturally. Cross-cultural research has the potential to reveal both autism-specific universal deficits and the role that specific cultures play in the way empathy operates in different countries.
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition
NASA Astrophysics Data System (ADS)
Yin, Xi; Liu, Xiaoming
2018-02-01
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
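The dynamic-weighting idea can be sketched abstractly. The abstract does not state the paper's actual weighting scheme, so the following uses a simple inverse-loss heuristic purely for illustration; it is not the authors' method.

```python
def dynamic_weights(side_losses, total=1.0):
    """Assign each side task a loss weight inversely proportional to its
    current loss, normalized to sum to `total`. A generic heuristic, not
    necessarily the scheme from the paper."""
    inv = [1.0 / max(l, 1e-8) for l in side_losses]
    s = sum(inv)
    return [total * x / s for x in inv]

def combined_loss(main_loss, side_losses):
    """MTL objective: main-task loss plus dynamically weighted side-task
    losses (pose, illumination, expression in the paper's setup)."""
    ws = dynamic_weights(side_losses)
    return main_loss + sum(w * l for w, l in zip(ws, side_losses))
```

The point of any such scheme is to avoid hand-tuning one loss weight per side task as the number of side tasks grows.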
The Costs and Benefits of Testing and Guessing on Recognition Memory
ERIC Educational Resources Information Center
Huff, Mark J.; Balota, David A.; Hutchison, Keith A.
2016-01-01
We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…
Rapid Naming Speed and Chinese Character Recognition
ERIC Educational Resources Information Center
Liao, Chen-Huei; Georgiou, George K.; Parrila, Rauno
2008-01-01
We examined the relationship between rapid naming speed (RAN) and Chinese character recognition accuracy and fluency. Sixty-three grade 2 and 54 grade 4 Taiwanese children were administered four RAN tasks (colors, digits, Zhu-Yin-Fu-Hao, characters), and two character recognition tasks. RAN tasks accounted for more reading variance in grade 4 than…
How Fast is Famous Face Recognition?
ERIC Educational Resources Information Center
Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.
2012-01-01
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks under speed constraints: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal faces), and a gender categorization task. The results show that, despite the speed constraints, subjects were slow when they had to categorize famous faces: the minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503
ERIC Educational Resources Information Center
Parks, Colleen M.
2013-01-01
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which…
ERIC Educational Resources Information Center
de Zubicaray, Greig I.; McMahon, Katie L.; Hayward, Lydia; Dunn, John C.
2011-01-01
In the present study, items pre-exposed in a familiarization series were included in a list discrimination task to manipulate memory strength. At test, participants were required to discriminate strong targets and strong lures from weak targets and new lures. This resulted in a concordant pattern of increased "old" responses to strong targets and…
Chemical Entity Recognition and Resolution to ChEBI
Grego, Tiago; Pesquita, Catia; Bastos, Hugo P.; Couto, Francisco M.
2012-01-01
Chemical entities are ubiquitous throughout the biomedical literature, and text-mining systems that can efficiently identify these entities are required. Due to the lack of available corpora and data resources, the community has focused its efforts on the development of gene and protein named entity recognition systems, but with the release of ChEBI and the availability of an annotated corpus, this task can now be addressed. We developed a machine-learning-based method for chemical entity recognition and a lexical-similarity-based method for chemical entity resolution, and compared them with Whatizit, a popular dictionary-based method. Our methods outperformed the dictionary-based method in all tasks, yielding an improvement in F-measure of 20% for the entity recognition task, 2–5% for the entity resolution task, and 15% for the combined entity recognition and resolution task. PMID:25937941
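A lexical-similarity resolver of the kind described can be sketched in a few lines. The miniature lexicon and threshold below are hypothetical; a real system would match against the full ChEBI ontology with a tuned similarity measure.

```python
from difflib import SequenceMatcher

# Hypothetical miniature lexicon mapping names to ChEBI-style identifiers;
# a real resolver would load the full ChEBI ontology.
LEXICON = {
    "glucose": "CHEBI:17234",
    "ethanol": "CHEBI:16236",
    "acetic acid": "CHEBI:15366",
}

def resolve(mention, lexicon=LEXICON, min_score=0.6):
    """Return (chebi_id, score) for the lexicon entry most lexically
    similar to the recognized mention, or None if nothing clears the
    threshold."""
    best_id, best_score = None, 0.0
    for name, chebi_id in lexicon.items():
        score = SequenceMatcher(None, mention.lower(), name).ratio()
        if score > best_score:
            best_id, best_score = chebi_id, score
    return (best_id, best_score) if best_score >= min_score else None
```

Lexical similarity lets the resolver tolerate spelling variants and truncations that defeat exact dictionary lookup.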
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2017-12-01
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Georgiadis, Pantelis; Cavouras, Dionisis; Kalatzis, Ioannis; Glotsos, Dimitris; Athanasiadis, Emmanouil; Kostopoulos, Spiros; Sifaki, Koralia; Malamas, Menelaos; Nikiforidis, George; Solomou, Ekaterini
2009-01-01
Three-dimensional (3D) texture analysis of volumetric brain magnetic resonance (MR) images has been identified as an important indicator for discriminating among different brain pathologies. The purpose of this study was to evaluate the efficiency of 3D textural features, using a pattern recognition system, in the task of discriminating benign, malignant and metastatic brain tissues on T1 postcontrast MR imaging (MRI) series. The dataset consisted of 67 brain MRI series obtained from patients with verified and untreated intracranial tumors. The pattern recognition system was designed as an ensemble classification scheme employing a support vector machine classifier, specially modified to integrate the least-squares features transformation logic in its kernel function. The latter, in conjunction with the 3D textural features, boosted the system's performance in discriminating metastatic, malignant and benign brain tumors to 77.14%, 89.19% and 93.33% accuracy, respectively. The method was evaluated using an external cross-validation process; thus, results might be considered indicative of the generalization performance of the system on "unseen" cases. The proposed system might be used as an assisting tool for brain tumor characterization on volumetric MRI series.
A human performance evaluation of graphic symbol-design features.
Samet, M G; Geiselman, R E; Landee, B M
1982-06-01
Sixteen subjects learned each of two tactical display symbol sets (conventional symbols and iconic symbols) in turn and were then shown a series of graphic displays containing various symbol configurations. For each display, the subject was asked questions corresponding to different behavioral processes relating to symbol use (identification, search, comparison, pattern recognition). The results indicated that: (a) conventional symbols yielded faster pattern-recognition performance than iconic symbols, and iconic symbols did not yield faster identification than conventional symbols; and (b) the portrayal of additional feature information (through the use of perimeter density or vector projection coding) slowed processing of the core symbol information in the four tasks, but certain symbol-design features created less perceptual interference and corresponded better with the portrayal of specific tactical concepts than others. The results were discussed in terms of the complexities involved in selecting symbol-design features for use in graphic tactical displays.
Batterink, Laura; Neville, Helen
2011-01-01
The vast majority of word meanings are learned simply by extracting them from context, rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M−). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M− words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M− words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time-window compared to M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component (LPC) is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, while implicit representations may require more extensive exposure or more time to emerge. PMID:21452941
Brain activation for lexical decision and reading aloud: two sides of the same coin?
Carreiras, Manuel; Mechelli, Andrea; Estévez, Adelina; Price, Cathy J
2007-03-01
This functional magnetic resonance imaging study compared the neuronal implementation of word and pseudoword processing during two commonly used word recognition tasks: lexical decision and reading aloud. In the lexical decision task, participants made a finger-press response to indicate whether a visually presented letter string is a word or a pseudoword (e.g., "paple"). In the reading-aloud task, participants read aloud visually presented words and pseudowords. The same sets of words and pseudowords were used for both tasks. This enabled us to look for the effects of task (lexical decision vs. reading aloud), lexicality (words vs. nonwords), and the interaction of lexicality with task. We found very similar patterns of activation for lexical decision and reading aloud in areas associated with word recognition and lexical retrieval (e.g., left fusiform gyrus, posterior temporal cortex, pars opercularis, and bilateral insulae), but task differences were observed bilaterally in sensorimotor areas. Lexical decision increased activation in areas associated with decision making and finger tapping (bilateral postcentral gyri, supplementary motor area, and right cerebellum), whereas reading aloud increased activation in areas associated with articulation and hearing the sound of the spoken response (bilateral precentral gyri, superior temporal gyri, and posterior cerebellum). The effect of lexicality (pseudoword vs. words) was also remarkably consistent across tasks. Nevertheless, increased activation for pseudowords relative to words was greater in the left precentral cortex for reading than lexical decision, and greater in the right inferior frontal cortex for lexical decision than reading. We attribute these effects to differences in the demands on speech production and decision-making processes, respectively.
Recognition and reading aloud of kana and kanji word: an fMRI study.
Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao
2009-03-16
It has been proposed that different brain regions are recruited for processing two Japanese writing systems, namely, kanji (morphograms) and kana (syllabograms). However, this difference may depend upon what type of word was used and also on what type of task was performed. Using fMRI, we investigated brain activation for processing kanji and kana words with similar high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side, and the subjects were required to press a button corresponding to the real word in the word recognition task and were required to read aloud the real word in the reading aloud task. Brain activations were similar between kanji and kana during reading aloud task, whereas during word recognition task in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which were related mainly to visual word-form analysis and visuospatial attention. Concerning the difference of brain activity between two tasks, differential activation was found only in the regions associated with task-specific sensorimotor processing for kana, whereas visuospatial attention network also showed greater activation during word recognition task than during reading aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between the script characteristics and the task demands.
Zhao, Yongjia; Zhou, Suiping
2017-02-28
The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, for either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition with noisy and complex data sequences collected from casually worn wearable devices like smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using the Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN's input image, which is encoded straightforwardly from the inertial sensor data sequences, is called Angle Embedded Gait Dynamic Image (AE-GDI). AE-GDI is a new two-dimensional representation of gait dynamics, which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset, with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy over existing approaches and provides an effective parametric solution for identification among a large number of subjects by gait patterns. PMID:28264503
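The rotation-invariance property that motivates AE-GDI can be illustrated with a minimal sketch. The actual AE-GDI construction builds a two-dimensional image from such angles at multiple lags; only the invariant core (angles between inertial samples) is shown here, with hypothetical helper names.

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def angle_sequence(accel, lag=1):
    """Angles between acceleration samples `lag` steps apart. Because the
    angle between two vectors is unchanged when the same rotation is
    applied to both, the sequence is invariant to device orientation."""
    return [angle_between(accel[i], accel[i + lag])
            for i in range(len(accel) - lag)]
```

Rotating every sample by the same rotation (e.g. a phone shifting in a pocket) leaves the angle sequence unchanged, which is exactly the robustness the representation targets.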
Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; Sekine, Masashi; González, José; Gu, Dongyun; Chen, Weidong; Yu, Wenwei
2014-01-01
Our research focuses on the development of an at-home health care biomonitoring mobile robot for people in need of care. The robot's main task is to detect and track a designated subject while recognizing his or her activity for analysis, and to provide a warning in an emergency. In order to push the system towards real application, in this study we tested the robustness of the robot system against several major environment changes, control parameter changes, and subject variation. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, since the main objective of the system is to detect walking activities of different patterns for further analysis, we propose a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy. PMID:25587560
Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.
Wieckowski, Andrea Trubanova; White, Susan W
2017-01-01
Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in force-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less to the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly but not during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.
Challies, Danna M; Hunt, Maree; Garry, Maryanne; Harper, David N
2011-01-01
The misinformation effect is a term used in the cognitive psychological literature to describe both experimental and real-world instances in which misleading information is incorporated into an account of an historical event. In many real-world situations, it is not possible to identify a distinct source of misinformation, and it appears that the witness may have inferred a false memory by integrating information from a variety of sources. In a stimulus equivalence task, a small number of trained relations between some members of a class of arbitrary stimuli result in a large number of untrained, or emergent relations, between all members of the class. Misleading information was introduced into a simple memory task between a learning phase and a recognition test by means of a match-to-sample stimulus equivalence task that included both stimuli from the original learning task and novel stimuli. At the recognition test, participants given equivalence training were more likely to misidentify patterns than those who were not given such training. The misinformation effect was distinct from the effects of prior stimulus exposure, or partial stimulus control. In summary, stimulus equivalence processes may underlie some real-world manifestations of the misinformation effect. PMID:22084495
Stenbäck, Victoria; Hällgren, Mathias; Lyxell, Björn; Larsby, Birgitta
2015-06-01
Cognitive functions and speech-recognition-in-noise were evaluated with a cognitive test battery, assessing response inhibition using the Hayling task, working memory capacity (WMC) and verbal information processing, and an auditory test of speech recognition. The cognitive tests were performed in silence whereas the speech recognition task was presented in noise. Thirty young normally-hearing individuals participated in the study. The aim of the study was to investigate one executive function, response inhibition, and whether it is related to individual working memory capacity (WMC), and how speech-recognition-in-noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. Our findings also suggest that high verbal ability was associated with better performance in the Hayling task. We also present findings suggesting that individuals who perform well on tasks involving response inhibition, and WMC, also perform well on a speech-in-noise task. Our findings indicate that capacity to resist semantic interference can be used to predict performance on speech-in-noise tasks. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Information based universal feature extraction
NASA Astrophysics Data System (ADS)
Amiri, Mohammad; Brause, Rüdiger
2015-02-01
In many real-world image-based pattern recognition tasks, the extraction and use of task-relevant features is the most crucial part of the diagnosis. In the standard approach, features mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon-information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.
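The core idea of scoring candidate features by the Shannon information they carry about a class can be sketched with discrete mutual information. This is only an illustrative reading of the abstract: the authors' actual network, learning constraints, and feature set are not specified, so the function names and the ranking scheme below are assumptions.

```python
import math
from collections import Counter

def mutual_information(feature_values, labels):
    """Estimate I(F; C) in bits for a discrete feature against class labels."""
    n = len(labels)
    joint = Counter(zip(feature_values, labels))
    pf = Counter(feature_values)
    pc = Counter(labels)
    mi = 0.0
    for (f, c), nfc in joint.items():
        p_fc = nfc / n
        mi += p_fc * math.log2(p_fc / ((pf[f] / n) * (pc[c] / n)))
    return mi

def rank_features(feature_columns, labels):
    """Rank candidate features by the information they carry about the class.
    feature_columns: list of columns, each a list of discrete feature values."""
    scores = [(i, mutual_information(col, labels))
              for i, col in enumerate(feature_columns)]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

A feature perfectly predictive of a balanced binary class scores 1 bit, while a feature independent of the class scores 0, so ranking by this score keeps the most class-relevant features.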
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a pattern recognition approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients as well as relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset with two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), and naïve Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. For the detail coefficients (D5), derived from the 3.90-7.81 Hz sub-band, accuracy rates were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable, at 97.11-89.63% (MLP) and 91.60-81.07% (NB) for the A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
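The relative-wavelet-energy feature in this abstract can be sketched with a plain Haar decomposition: each band's energy as a share of the total. This is a toy version under stated assumptions; the study used a different mother wavelet and a standard multi-resolution DWT, and the function names here are illustrative.

```python
def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / 2 ** 0.5 for i in range(0, len(signal) - 1, 2)]
    return a, d

def relative_wavelet_energy(signal, levels=5):
    """Decompose to `levels` levels and return each band's share of total energy:
    [D1, D2, ..., Dlevels, Alevels]."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(c * c for c in detail))   # detail-band energy
    energies.append(sum(c * c for c in approx))       # final approximation band
    total = sum(energies)
    return [e / total for e in energies]
```

Because the Haar transform is orthonormal, the band energies sum to the signal's total energy, so the relative energies form a distribution over frequency bands, which is what gets normalized and fed to FDR/PCA in the abstract's pipeline.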
NASA Astrophysics Data System (ADS)
Yu, Francis T. S.; Jutamulia, Suganda
2008-10-01
Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalesky and Haldum M. Oxaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhou Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Ziangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.
Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Background/Aims: Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods: Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results: Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion: The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living. PMID:22962551
Semantic and episodic memory of music are subserved by distinct neural networks.
Platel, Hervé; Baron, Jean-Claude; Desgranges, Béatrice; Bernard, Frédéric; Eustache, Francis
2003-09-01
Numerous functional imaging studies have shown that retrieval from semantic and episodic memory is subserved by distinct neural networks. However, these results were essentially obtained with verbal and visuospatial material. The aim of this work was to determine the neural substrates underlying the semantic and episodic components of music using familiar and nonfamiliar melodic tunes. To study musical semantic memory, we designed a task in which the instruction was to judge whether or not the musical extract was felt as "familiar." To study musical episodic memory, we constructed two delayed recognition tasks, one containing only familiar and the other only nonfamiliar items. For each recognition task, half of the extracts (targets) were presented in the prior semantic task. The episodic and semantic tasks were to be contrasted by a comparison to two perceptive control tasks and to one another. Cerebral blood flow was assessed by means of the oxygen-15-labeled water injection method, using high-resolution PET. Distinct patterns of activations were found. First, regarding the episodic memory condition, bilateral activations of the middle and superior frontal gyri and precuneus (more prominent on the right side) were observed. Second, the semantic memory condition disclosed extensive activations in the medial and orbital frontal cortex bilaterally, the left angular gyrus, and predominantly the left anterior part of the middle temporal gyri. The findings from this study are discussed in light of the available neuropsychological data obtained in brain-damaged subjects and functional neuroimaging studies.
It Takes Time to Prime: Semantic Priming in the Ocular Lexical Decision Task
Hoedemaker, Renske S.; Gordon, Peter C.
2014-01-01
Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDT) was replaced with an eye-movement response through a sequence of three words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative LD on each word in the triplet. In Experiment 2, LD responses were delayed until all three letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, while limited during text reading as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of τ, meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases where the LD is difficult as indicated by longer response times. Compared to the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT. PMID:25181368
Emotional content enhances true but not false memory for categorized stimuli.
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2013-04-01
Past research has shown that emotion enhances true memory, but that emotion can either increase or decrease false memory. Two theoretical possibilities, the distinctiveness of emotional stimuli and the conceptual relatedness of emotional content, have been implicated as being responsible for influencing both true and false memory for emotional content. In the present study, we sought to identify the mechanisms that underlie these mixed findings by equating the thematic relatedness of the study materials across each type of valence used (negative, positive, or neutral). In three experiments, categorically bound stimuli (e.g., funeral, pets, and office items) were used for this purpose. When the encoding task required the processing of thematic relatedness, a significant true-memory enhancement for emotional content emerged in recognition memory, but no emotional boost to false memory (Experiment 1). This pattern persisted for true memory with a longer retention interval between study and test (24 h), and false recognition was reduced for emotional items (Experiment 2). Finally, better recognition memory for emotional items once again emerged when the encoding task (arousal ratings) required the processing of the emotional aspect of the study items, with no emotional boost to false recognition (Experiment 3). Together, these findings suggest that when emotional and neutral stimuli are equivalently high in thematic relatedness, emotion continues to improve true memory, but it does not override other types of grouping to increase false memory.
ERIC Educational Resources Information Center
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
2010-01-01
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Al-Marri, Faraj; Reza, Faruque; Begum, Tahamina; Hitam, Wan Hazabbah Wan; Jin, Goh Khean; Xiang, Jing
2017-10-25
Visual cognitive function is important for building up executive function in daily life. Perception of visual Number form (e.g., an Arabic digit) and numerosity (the magnitude of the Number) is of interest to cognitive neuroscientists. Neural correlates and the functional measurement of Number representations are complex occurrences when their semantic categories are assimilated with other concepts of shape and colour. Colour perception can be processed further to modulate visual cognition. The Ishihara pseudoisochromatic plates are one of the best and most common screening tools for basic red-green colour vision testing. However, there is a lack of study of visual cognitive function assessment using these pseudoisochromatic plates. We recruited 25 healthy normal trichromat volunteers and extended these studies using a 128-sensor net to record event-related EEG. Subjects were asked to respond by pressing numbered buttons when they saw the Number and Non-number plates of the Ishihara colour vision test. Amplitudes and latencies of the N100 and P300 event-related potential (ERP) components were analysed from 19 electrode sites in the international 10-20 system. A brain topographic map, cortical activation patterns, and Granger causation (effective connectivity) were analysed from 128 electrode sites. The absence of major significant differences in the N100 ERP components indicates that early selective attention processing was similar for Number and Non-number plate stimuli. However, Non-number plates evoked significantly higher amplitudes and longer latencies of the P300 ERP component, with slower reaction times, than Number plates, implying a greater attentional load during Non-number plate processing. A different pattern of asymmetric scalp voltage maps was noticed for the P300 components, with higher intensity in the left hemisphere for Number plate tasks and higher intensity in the right hemisphere for Non-number plate tasks.
Asymmetric cortical activation and connectivity patterns revealed that Number recognition occurred in the occipital and left frontal areas, whereas activation was limited to the occipital area during Non-number plate processing. Finally, the results showed that the visual recognition of Numbers dissociates from the recognition of Non-numbers at the level of defined neural networks. Number recognition was not only a process of visual perception and attention, but was also related to a higher level of cognitive function, that of language.
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained more and more interest over the years as its importance in natural viewing conditions has been ignored for a long time. In addition to analyzing standard ERPs, we conducted a single trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.
Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory
Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.
2011-01-01
Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100
Connor, L T; Balota, D A; Neely, J H
1992-05-01
Experiment 1 replicated Yaniv and Meyer's (1987) finding that lexical decision and episodic recognition performance was better for words previously yielding high-accessibility levels (a combination of feeling-of-knowing and tip-of-the-tongue ratings) in comparison with those yielding low-accessibility levels in a rare word definition task. Experiment 2 yielded the same pattern even though lexical decisions preceded accessibility estimates by a full week. Experiment 3 dismissed the possibility that the Experiment 2 results may have been due to a long-term influence from the lexical decision task to the rare word judgment task. These results support a model in which Ss (a) retrieve topic familiarity information in making accessibility estimates in the rare word definition task and (b) use this information to modulate lexical decision performance.
Unsupervised learning of digit recognition using spike-timing-dependent plasticity
Diehl, Peter U.; Cook, Matthew
2015-01-01
In order to understand how the mammalian neocortex performs computations, two things are necessary: we need a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Accordingly, in recent years there has been increasing interest in how spiking neural networks (SNNs) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to an SNN. We present an SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks. PMID:26941637
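The pair-based spike-timing-dependent plasticity at the heart of such networks can be sketched as a single weight update. The parameter values and the simple exponential form below are illustrative only; the paper's actual rule additionally involves conductance-based synapses and weight-dependent learning.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike, depress otherwise; weight is clipped to [w_min, w_max].
    Spike times are in milliseconds; tau sets the plasticity window."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # long-term potentiation
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)    # long-term depression
    return min(w_max, max(w_min, w))
```

Because the weight change decays exponentially with the spike-time difference, only near-coincident pre/post spikes move the weight appreciably, which is what lets the network learn input patterns without any class labels.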
Insensitivity of visual short-term memory to irrelevant visual information.
Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud
2002-07-01
Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.
Global neural pattern similarity as a common basis for categorization and recognition memory.
Davis, Tyler; Xue, Gui; Love, Bradley C; Preston, Alison R; Poldrack, Russell A
2014-05-28
Familiarity, or memory strength, is a central construct in models of cognition. In previous categorization and long-term memory research, correlations have been found between psychological measures of memory strength and activation in the medial temporal lobes (MTLs), which suggests a common neural locus for memory strength. However, activation alone is insufficient for determining whether the same mechanisms underlie neural function across domains. Guided by mathematical models of categorization and long-term memory, we develop a theory and a method to test whether memory strength arises from the global similarity among neural representations. In human subjects, we find significant correlations between global similarity among activation patterns in the MTLs and both subsequent memory confidence in a recognition memory task and model-based measures of memory strength in a category learning task. Our work bridges formal cognitive theories and neuroscientific models by illustrating that the same global similarity computations underlie processing in multiple cognitive domains. Moreover, by establishing a link between neural similarity and psychological memory strength, our findings suggest that there may be an isomorphism between psychological and neural representational spaces that can be exploited to test cognitive theories at both the neural and behavioral levels. Copyright © 2014 the authors 0270-6474/14/347472-13$15.00/0.
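The global-similarity computation the authors test can be sketched in the style of summed-similarity (global matching) models of memory: the strength of a probe is its summed exponential similarity to all stored patterns. This is a toy sketch over small vectors, standing in for the voxel activation patterns analyzed in the study; the specificity parameter `c` and the Euclidean distance are conventional modeling choices, not the paper's exact pipeline.

```python
import math

def memory_strength(probe, stored_patterns, c=1.0):
    """Summed exponential similarity of a probe pattern to all stored patterns,
    as in exemplar/global-matching models (higher = more familiar)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(math.exp(-c * dist(probe, p)) for p in stored_patterns)
```

A probe lying near many stored patterns accumulates high strength, while a distant probe does not, which is the behavioral signature the study correlates with MTL activation-pattern similarity.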
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested based on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
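The RBSDM idea, approximating a mapping as a sum of Gaussians centered on learned patterns, can be sketched as a normalized radial-basis predictor. This is a minimal reading of the abstract, not the documented NASA implementation: the normalization, the shared `sigma`, and the function name are assumptions.

```python
import math

def rbsdm_predict(query, train_inputs, train_targets, sigma=1.0):
    """Interpolate a target vector (e.g., joint coordinates) for `query` as a
    Gaussian-weighted average over previously learned input patterns."""
    weights = []
    for x in train_inputs:
        d2 = sum((q - xi) ** 2 for q, xi in zip(query, x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    # Each output dimension is the weighted average of the stored targets.
    return [sum(w * t[j] for w, t in zip(weights, train_targets)) / total
            for j in range(len(train_targets[0]))]
```

Queries near a stored camera-coordinate pattern recover (approximately) that pattern's joint coordinates, while queries between patterns are smoothly interpolated, which is the behavior the abstract evaluates.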
Investigating the anticipatory nature of pattern perception in sport.
Gorman, Adam D; Abernethy, Bruce; Farrow, Damian
2011-07-01
The aim of the present study was to examine the anticipatory nature of pattern perception in sport by using static and moving basketball patterns across three different display types. Participants of differing skill levels were included in order to determine whether the effects would be moderated by the knowledge and experience of the observer in the same manner reported previously for simple images. The results from a pattern recognition task showed that both expert and recreational participants were more likely to anticipate the next likely state of a pattern when it was presented as a moving video, but only the experts appeared to have the depth of understanding required to elicit the same anticipatory encoding for patterns presented as schematic images. The results extend those reported in previous research and provide further evidence of an anticipatory encoding in pattern perception for images containing complex, interrelated patterns.
Neural correlates of incidental memory in mild cognitive impairment: an fMRI study.
Mandzia, Jennifer L; McAndrews, Mary Pat; Grady, Cheryl L; Graham, Simon J; Black, Sandra E
2009-05-01
Behaviour and fMRI brain activation patterns were compared during encoding and recognition tasks in mild cognitive impairment (MCI) (n=14) and normal controls (NC) (n=14). Deep (natural vs. man-made) and shallow (color vs. black and white) decisions were made at encoding, and pictures from each condition were presented for yes/no recognition 20 min later. The MCI group showed less inferior frontal activation during deep (left only) and superficial (bilateral) encoding, and less activation in both medial temporal lobes (MTL). When performance was equivalent (recognition of words encoded superficially), MTL activation was similar for the two groups, but during recognition testing of deeply encoded items NC showed more activation in both prefrontal and left MTL regions. In a region-of-interest analysis, the extent of activation during deep encoding in the parahippocampi bilaterally and in the left hippocampus correlated with subsequent recognition accuracy for those items in controls but not in MCI, which may reflect the heterogeneity of activation responses in conjunction with different degrees of pathology burden and progression status in the MCI group.
Violent video game players and non-players differ on facial emotion recognition.
Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M
2016-01-01
Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly. © 2015 Wiley Periodicals, Inc.
Continuous Chinese sign language recognition with CNN-LSTM
NASA Astrophysics Data System (ADS)
Yang, Su; Zhu, Qing
2017-07-01
The goal of sign language recognition (SLR) is to translate sign language into text, providing a convenient communication tool between deaf-mute and hearing people. In this paper, we formulate a model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish continuous recognition. With the strong representational ability of the CNN, the information in frames captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is connected to the fully-connected layer of the CNN. As a recurrent neural network (RNN), the LSTM is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, the LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabularies. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
Processing Electromyographic Signals to Recognize Words
NASA Technical Reports Server (NTRS)
Jorgensen, C. C.; Lee, D. D.
2009-01-01
A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task.
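Windowed time-domain features such as RMS amplitude and zero-crossing counts are a common front end for surface-EMG classifiers; the sketch below is an illustrative assumption, not necessarily the exact feature set of this method:

```python
import numpy as np

def emg_features(signal, win=64, step=32):
    """Slide a window over the (digitized) EMG trace and compute two
    classic features per window: RMS amplitude and zero-crossing count.
    Window sizes are illustrative, not taken from the NASA method."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        rms = np.sqrt(np.mean(w ** 2))                    # signal energy
        zc = np.sum(np.sign(w[:-1]) != np.sign(w[1:]))    # frequency proxy
        feats.append((rms, zc))
    return np.array(feats)

rng = np.random.default_rng(1)
sig = rng.normal(size=512)     # stand-in for a digitized EMG trace
F = emg_features(sig)          # one (rms, zc) row per window
```

The resulting feature matrix is what would be handed to the neural-network pattern classifier.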
REM sleep and emotional face memory in typically-developing children and children with autism.
Tessier, Sophie; Lambert, Andréane; Scherzer, Peter; Jemel, Boutheina; Godbout, Roger
2015-09-01
The relationship between REM sleep and memory was assessed in 13 neurotypical children and 13 children with autism spectrum disorder (ASD). A neutral/positive/negative face recognition task was administered the evening before (learning and immediate recognition) and the morning after (delayed recognition) sleep. The number of rapid eye movements (REMs) and beta and theta EEG activity over the visual areas were measured during REM sleep. Compared to neurotypical children, children with ASD showed more theta activity and longer reaction times (RTs) for correct responses in delayed recognition of neutral faces. Both groups showed a positive correlation between sleep and performance, but different patterns emerged: in neurotypical children, accuracy for recalling neutral faces and overall RT improvement overnight were correlated with EEG activity and REMs; in children with ASD, overnight RT improvement for positive and negative faces correlated with theta and beta activity, respectively. These results suggest that neurotypical children and children with ASD use different sleep-related brain networks to process faces. Copyright © 2015 Elsevier B.V. All rights reserved.
Hand gesture recognition in confined spaces with partial observability and occultation constraints
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2016-05-01
Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.
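The forward algorithm is the core of HMM-based recognition: it scores an observation sequence against a model. A minimal numpy version, with a hypothetical two-state hand-motion model and three observable gesture symbols (all probabilities invented for illustration):

```python
import numpy as np

def forward(obs, A, B, pi):
    """HMM forward pass: returns P(observation sequence | model).
    A: state-transition matrix, B: per-state emission probabilities,
    pi: initial state distribution."""
    alpha = pi * B[:, obs[0]]             # joint prob. of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then absorb next symbol
    return alpha.sum()

# Two hidden hand states, three observable gesture symbols (hypothetical).
A  = np.array([[0.8, 0.2],
               [0.3, 0.7]])
B  = np.array([[0.6, 0.3, 0.1],
               [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])

p = forward([0, 1, 2], A, B, pi)          # likelihood of one symbol sequence
```

Recognition then amounts to evaluating each candidate gesture model on the observed motion sequence and picking the highest-likelihood model.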
Effects of age, education, and sex on response bias in a recognition task.
Marquié, J C; Baracat, B
2000-09-01
This study examined age-related differences in decision criteria and the extent to which inconsistencies in earlier findings could be due to sampling artifacts, especially the underlying effects of educational level and sex. Male and female participants (N = 3,059) from 4 age groups (32, 42, 52, and 62 years) and a wide range of educational levels performed a word recognition task. Response bias was assessed with a nonparametric index derived from signal detection theory. The analyses revealed no age differences except for the most educated subjects, for whom increased age was associated with stricter decision criteria. Lower levels of education and men as compared with women were associated with a more conservative bias. Controlling for the level of sensitivity did not significantly change this pattern of results. This finding stresses the need for caution in generalizing age differences obtained from samples that are only partly representative or imbalanced with respect to education and sex.
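The abstract does not name the specific nonparametric index; one widely used candidate is Donaldson's B″D, sketched here under that assumption:

```python
def b_double_prime_d(hit_rate, fa_rate):
    """Donaldson's B''_D nonparametric response-bias index from signal
    detection theory. Ranges from -1 (liberal) to +1 (conservative),
    with 0 meaning a neutral criterion. The study used *a* nonparametric
    SDT bias index; this particular formula is an illustrative assumption."""
    h, f = hit_rate, fa_rate
    return ((1 - h) * (1 - f) - h * f) / ((1 - h) * (1 - f) + h * f)

conservative = b_double_prime_d(0.60, 0.05)   # says "old" rarely -> positive
liberal      = b_double_prime_d(0.95, 0.40)   # says "old" often  -> negative
```

A "stricter decision criterion," as reported for the most educated older participants, corresponds to a more positive value of such an index.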
Fine-grained recognition of plants from images.
Šulc, Milan; Matas, Jiří
2017-01-01
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem, when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs makes them suitable for plant recognition "in the wild" where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
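For the texture-analysis branch, a classic descriptor of the kind used for bark and leaf recognition is the local binary pattern (LBP) histogram; this simplified 8-neighbour sketch is illustrative rather than the authors' exact pipeline:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary patterns + normalized 256-bin histogram.
    Each interior pixel gets an 8-bit code recording which neighbours are
    at least as bright as it; the code histogram describes the texture."""
    c = img[1:-1, 1:-1]                              # interior pixels
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]        # shifted neighbour view
        code |= ((n >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
texture = rng.integers(0, 256, size=(32, 32))        # stand-in bark patch
h = lbp_histogram(texture)
```

The 256-dimensional histogram would then feed a conventional classifier (e.g. an SVM), one histogram per segmented bark or leaf image.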
Sex differences in functional activation patterns revealed by increased emotion processing demands.
Hall, Geoffrey B C; Witelson, Sandra F; Szechtman, Henry; Nahmias, Claude
2004-02-09
Two [O(15)] PET studies assessed sex differences in regional brain activation during the recognition of emotional stimuli. Study I revealed that the recognition of emotion in visual faces resulted in bilateral frontal activation in women and unilateral right-sided activation in men. In Study II, the complexity of the emotional face task was increased through the addition of associated auditory emotional stimuli. Men again showed unilateral frontal activation, in this case to the left, whereas women did not show bilateral frontal activation but showed greater limbic activity. These results suggest that when processing broader cross-modal emotional stimuli, men engage more in associative cognitive strategies while women draw more on primary emotional references.
Heger, Dominic; Herff, Christian; Schultz, Tanja
2014-01-01
In this paper, we show that multiple operations of the typical pattern recognition chain of an fNIRS-based BCI, including feature extraction and classification, can be unified by solving a convex optimization problem. We formulate a regularized least squares problem that learns a single affine transformation of raw HbO(2) and HbR signals. We show that this transformation can achieve competitive results in an fNIRS BCI classification task, as it significantly improves recognition of different levels of workload over previously published results on a publicly available n-back data set. Furthermore, we visualize the learned models and analyze their spatio-temporal characteristics.
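The core idea, learning a single affine transformation by regularized least squares, has a closed-form solution. A toy sketch on synthetic stand-in data (the dimensions, binary workload labels, and regularization strength are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for raw fNIRS trials: n trials x d samples (think HbO2 and HbR
# channels concatenated); labels are two workload levels coded as -1/+1.
n, d = 200, 50
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

# Affine map: append a constant-1 column so the bias is learned jointly.
Xa = np.hstack([X, np.ones((n, 1))])
lam = 1.0                                  # regularization strength (assumed)

# Closed-form regularized least squares: w = (Xa'Xa + lam*I)^-1 Xa' y
w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(d + 1), Xa.T @ y)

train_acc = np.mean(np.sign(Xa @ w) == y)  # thresholding the affine output
```

The appeal of this formulation is that "feature extraction" and "classification" collapse into one convex problem with a unique solution, so there are no separate stages to tune.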
Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun
2015-12-01
Deep learning algorithms are widely used for pattern recognition applications such as text recognition, object recognition, and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Unlike conventional works that adopt massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, a popular deep learning/inference algorithm. We implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power, at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.
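The quoted figures are self-consistent, as a quick unit check shows (energy efficiency is throughput divided by power):

```python
# Figures taken from the abstract above.
peak_gops = 411.3                       # peak throughput, giga-ops per second
peak_mw   = 213.1                       # peak power, milliwatts

# TOPS/W = (GOPS / 1000) / (mW / 1000): the two factors of 1000 cancel.
tops_per_watt = (peak_gops / 1000) / (peak_mw / 1000)
# 411.3 GOPS at 213.1 mW comes out to about 1.93 TOPS/W, matching the paper.
```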
ERIC Educational Resources Information Center
Brooks, Brian E.; Cooper, Eric E.
2006-01-01
Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…
Positive and negative emotion enhances the processing of famous faces in a semantic judgment task.
Bate, Sarah; Haslam, Catherine; Hodgson, Timothy L; Jansari, Ashok; Gregory, Nicola; Kay, Janice
2010-01-01
Previous work has consistently reported a facilitatory influence of positive emotion in face recognition (e.g., D'Argembeau, Van der Linden, Comblain, & Etienne, 2003). However, these reports asked participants to make recognition judgments in response to faces, and it is unknown whether emotional valence may influence other stages of processing, such as at the level of semantics. Furthermore, other evidence suggests that negative rather than positive emotion facilitates higher-level judgments when processing nonfacial stimuli (e.g., Mickley & Kensinger, 2008), and it is possible that negative emotion also influences later stages of face processing. The present study addressed this issue, examining the influence of emotional valence while participants made semantic judgments in response to a set of famous faces. Eye movements were monitored while participants performed this task, and analyses revealed a reduction in information extraction for the faces of liked and disliked celebrities compared with those of emotionally neutral celebrities. Thus, in contrast to work using familiarity judgments, both positive and negative emotion facilitated processing in this semantic-based task. This pattern of findings is discussed in relation to current models of face processing. Copyright 2009 APA, all rights reserved.
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
2015-01-01
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Integrated Remote Sensing Modalities for Classification at a Legacy Test Site
NASA Astrophysics Data System (ADS)
Lee, D. J.; Anderson, D.; Craven, J.
2016-12-01
Detecting, locating, and characterizing suspected underground nuclear test sites is of interest to the worldwide nonproliferation monitoring community. Remote sensing provides both cultural and surface geological information over a large search area in a non-intrusive manner. We have characterized a legacy nuclear test site at the Nevada National Security Site (NNSS) using an aerial system based on RGB imagery, light detection and ranging, and hyperspectral imaging. We integrate these different remote sensing modalities to perform pattern recognition and classification tasks on the test site. These tasks include detecting cultural artifacts and exotic materials. We evaluate if the integration of different remote sensing modalities improves classification performance.
Automated location detection of injection site for preclinical stereotactic neurosurgery procedure
NASA Astrophysics Data System (ADS)
Abbaszadeh, Shiva; Wu, Hemmings C. H.
2017-03-01
Currently, during stereotactic neurosurgery procedures, the manual task of locating the proper area for needle insertion or implantation of an electrode, cannula, or optic fiber can be time consuming; the task requires quickly and accurately finding the location for insertion. In this study we investigate an automated method to locate the entry point of the region of interest. The method leverages a digital image capture system, pattern recognition, and motorized stages. Template matching of known, anatomically identifiable regions is used to find regions of interest (e.g., Bregma) in rodents. For our initial study, we tackle the problem of automatically detecting the entry point.
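Template matching is typically implemented with normalized cross-correlation; the minimal numpy sketch below plants a template in a random image and recovers its location (production code would more likely call a library routine such as OpenCV's cv2.matchTemplate):

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation: returns the (row, col)
    where the template matches best. A stand-in for the matching step
    used to locate an anatomical landmark such as Bregma."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:          # keep the highest-correlation offset
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(4)
img = rng.normal(size=(40, 40))       # stand-in for a captured camera frame
tpl = img[12:20, 25:33].copy()        # "landmark" template planted at (12, 25)
loc = match_template(img, tpl)
```

The recovered pixel offset would then be converted to stage coordinates to drive the motorized positioning.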
Perceptual and affective mechanisms in facial expression recognition: An integrative review.
Calvo, Manuel G; Nummenmaa, Lauri
2016-09-01
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.
Sensory Contributions to Impaired Emotion Processing in Schizophrenia
Butler, Pamela D.; Abeles, Ilana Y.; Weiskopf, Nicole G.; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E.; Zemon, Vance; Loughead, James; Gur, Ruben C.; Javitt, Daniel C.
2009-01-01
Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective. PMID:19793797
Glucose enhancement of a facial recognition task in young adults.
Metzger, M M
2000-02-01
Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.
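The d-prime score mentioned here is the standard signal-detection sensitivity measure, z(hit rate) − z(false-alarm rate); a sketch using only the Python standard library (the exact correction the study applied to extreme rates is not stated):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA), where z is the
    inverse of the standard normal CDF. Higher d' means better
    discrimination of targets from distractors."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates illustrating the reported pattern: equal hit rates
# but fewer false alarms yield a higher d' for the glucose group.
glucose   = d_prime(0.80, 0.10)
saccharin = d_prime(0.80, 0.20)
```

This shows why fewer false alarms alone, with hit rate unchanged, push d' upward, which is exactly the pattern the study reports.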
[Explicit memory for type font of words in source monitoring and recognition tasks].
Hatanaka, Yoshiko; Fujita, Tetsuya
2004-02-01
We investigated whether people can consciously remember the type fonts of words, using methods that examine explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and test words using two type fonts, Gothic and MARU. After studying words under one encoding condition, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had been previously presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. These data showed conscious recollection of the type font of words on the source-monitoring task, and a dissociation between source-monitoring and old/new-recognition performance.
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-09-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed lower than controls on the task. Using task scores, more than 87% of the participants were allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.
Age-Related Differences in Listening Effort During Degraded Speech Recognition.
Ward, Kristina M; Shen, Jing; Souza, Pamela E; Grieco-Calub, Tina M
The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. 
These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control.
It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres
Bilalić, Merim; Kiesel, Andrea; Pohl, Carsten; Erb, Michael; Grodd, Wolfgang
2011-01-01
Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action related stream, as well as left infero-temporal areas along the ventral, object related stream are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts homologous areas on the right hemisphere were also engaged in chess specific object recognition. Based on these results, we discuss whether skilled object recognition does not only involve a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas. PMID:21283683
Measuring listening effort: driving simulator vs. simple dual-task paradigm
Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth
2014-01-01
Objectives: The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design: The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in driving task or visual reaction-time task performance across conditions quantified the change in listening effort. Results: Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance but did not affect performance in the visual reaction-time task (i.e., did not reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time paradigm was significant.
The finding that our older (56 to 85 years old) participants' better speech recognition performance did not result in reduced listening effort was not consistent with literature evaluating younger (approximately 20 years old) adults with normal hearing. A follow-up study was therefore conducted, in which the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. Contrary to the findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured in younger adults with normal hearing may not fully translate to older listeners with hearing impairment. PMID:25083599
Encoding: the keystone to efficient functioning of verbal short-term memory.
Barry, Johanna G; Sabisch, Beate; Friederici, Angela D; Brauer, Jens
2011-11-01
Verbal short-term memory (VSTM) is thought to play a critical role in language learning. It is indexed by the nonword repetition task, where listeners are asked to repeat meaningless words like 'blonterstaping'. The present study investigated the effect on nonword repetition performance of differences in efficiency of functioning of some part of the neural architecture mediating VSTM. Hypotheses were stated within Baddeley and Hitch's (1974) multicomponent model of VSTM, with respect to regions of the brain known to be active during tasks tapping into VSTM. We were specifically interested in activations associated with the posterior planum temporale (Spt) which emerge during rehearsal, since this region is hypothesized to be central to VSTM (Buchsbaum, Olsen, Koch, & Berman, 2005a). Participants performed a delayed reaction time task in the scanner which explicitly mimicked the three main stages of information-processing involved in VSTM (encoding, rehearsal, recall (here recognition)). The data for each stage were then convolved with scores from a separately measured nonword repetition task. Rather than observing a pattern of individual differences located to specific regions specialized for supporting VSTM, a dissociation in direction of correlation in overlapping regions of the brain was observed during encoding and recognition. Larger hemodynamic responses during encoding were associated with better nonword repetition, and vice versa during recognition. There was little evidence for a network of activations specialized for VSTM. Instead, the main correlations were observed in regions also known to be involved in long-term memory. It seems that individuals who are better at nonword repetition, and hence at language learning, activate these regions more efficiently than poorer nonword-repeaters early after stimulus input. These observations are discussed with respect to various models proposed for explaining the phenomenon of VSTM. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Use of Biometrics within Sub-Saharan Refugee Communities
2013-12-01
Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual's identity. Iris recognition provides strong authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris.
ERIC Educational Resources Information Center
Mulligan, Neil W.; Besken, Miri; Peterson, Daniel
2010-01-01
Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…
The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information
ERIC Educational Resources Information Center
Liu, Chang Hong; Ward, James; Markall, Helena
2007-01-01
Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…
A Spiking Neural Network System for Robust Sequence Recognition.
Yu, Qiang; Yan, Rui; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2016-03-01
This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. The architecture is a unified, consistent system with functional parts for sensory encoding, learning, and decoding, and is the first systematic model attempting to reveal the underlying neural mechanisms by considering upstream and downstream neurons together. The whole system operates in a consistent temporal framework, where the precise timing of spikes is employed for information processing and cognitive computing. Experimental results show that the system is competent at sequence recognition, robust to noisy sensory inputs, and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated through two benchmark tasks, on which it outperforms two other widely used learning rules. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way, using spiking neurons, to encode external stimuli into spatiotemporal spikes, to learn the encoded spike patterns with temporal learning rules, and to decode the sequence order with downstream neurons. The system structure would be beneficial for developments in both hardware and software.
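The core idea of encoding an external stimulus into precisely timed spikes can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic sketch, not the paper's architecture, and every constant below is an illustrative assumption:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic sketch of how an
# analog input can be encoded into precisely timed spikes. This is NOT the
# paper's model; all parameter values here are illustrative assumptions.

def lif_spike_times(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a LIF neuron driven by a list of input currents (one per step).

    Returns the list of time steps at which the neuron fired.
    """
    v = v_reset
    spikes = []
    for t, i_in in enumerate(current):
        # Leaky integration: the membrane potential decays toward rest
        # while being driven up by the input current.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset             # reset after firing
    return spikes

# A stronger stimulus crosses threshold sooner and fires more often,
# so stimulus intensity is carried by spike timing.
weak = lif_spike_times([0.08] * 100)
strong = lif_spike_times([0.2] * 100)
```

Downstream temporal learning rules of the kind the abstract describes then operate on these spike times rather than on firing rates.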
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition.
Rotation-invariant neural pattern recognition system with application to coin recognition.
Fukumi, M; Omatu, S; Takeda, F; Kosaka, T
1992-01-01
In pattern recognition, it is often necessary to classify transformed patterns. A neural pattern recognition system that is insensitive to rotation of the input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was applied to a rotation-invariant coin recognition problem: distinguishing between a 500 yen coin and a 500 won coin. The results show that the approach works well for pattern recognition under variable rotation.
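A classic way to build rotation-invariant features for a circular object such as a coin (a generic textbook technique, not necessarily the slab network used in the paper) is to sample pixel values along a ring around the center and keep only the magnitudes of the discrete Fourier transform: rotating the coin circularly shifts the ring samples, which changes only the phases, not the magnitudes.

```python
import cmath

# Generic rotation-invariance sketch (illustrative, not the paper's network):
# |DFT| of a ring of samples is unchanged when the ring is circularly
# shifted, i.e., when the coin is rotated.

def dft_magnitudes(samples):
    """Return |X_k| for the discrete Fourier transform of `samples`."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n)
    ]

def rotate(samples, shift):
    """Circularly shift the ring samples, mimicking a rotated input pattern."""
    return samples[shift:] + samples[:shift]

ring = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # toy intensity samples
feat_original = dft_magnitudes(ring)
feat_rotated = dft_magnitudes(rotate(ring, 3))     # same coin, rotated
```

By the DFT shift theorem, `feat_original` and `feat_rotated` agree to floating-point precision, so a classifier trained on such features sees the same input regardless of coin orientation.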
Does the presence of priming hinder subsequent recognition or recall performance?
Stark, Shauna M; Gordon, Barry; Stark, Craig E L
2008-02-01
Declarative and non-declarative memories are thought to be supported by two distinct memory systems that are often posited not to interact. However, Wagner, Maril, and Schacter (2000a) reported that, at the time priming was assessed, greater behavioural and neural priming was associated with lower levels of subsequent recognition memory, demonstrating an interaction between declarative and non-declarative memory. We examined this finding using a similar paradigm, in which participants made the same or different semantic word judgements following a short or long lag and a subsequent memory test. We found a similar overall pattern of results, with greater behavioural priming associated with a decrease in recognition and recall performance. However, neither within-participant nor between-participant analyses revealed significant correlations between priming and subsequent memory performance. These data suggest that both lag and task have effects on priming and declarative memory performance, but that these effects are largely independent and occur in parallel.
Neuroscience-Enabled Complex Visual Scene Understanding
2012-04-12
In some cases it is hard to say precisely where or what we are looking at, since a complex task governs eye fixations, as in driving. Ambiguity between objects (say, a door) can be resolved using prior information about the scene; this knowledge can be provided by gist models.
The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2017-01-01
Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities would impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels would modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted condition than under the restricted condition, and better than the low-empathy participants. Remarkably, however, the latter showed exactly the opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.
Learning Rotation-Invariant Local Binary Descriptor.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2017-08-01
In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.
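As background for the hand-crafted baselines named above, the classic local binary pattern (LBP) operator thresholds each pixel's 8 neighbors against the center pixel and packs the results into an 8-bit code; images are then represented by a histogram of codes. A minimal sketch of that baseline (the toy image is illustrative; this is the operator RI-LBD is designed to improve on, not RI-LBD itself):

```python
# Minimal sketch of the classic hand-crafted local binary pattern (LBP):
# each of the 8 neighbors is thresholded against the center pixel and the
# resulting bits form an 8-bit code per pixel.

# Neighbor offsets, clockwise from the top-left.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(image, r, c):
    """8-bit LBP code for interior pixel (r, c) of a 2D list `image`."""
    center = image[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            hist[lbp_code(image, r, c)] += 1
    return hist

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
# The center pixel 20 exceeds all neighbors, so every bit is 0 (code 0).
```

Rotating a patch circularly shifts these bits, which is exactly the sensitivity to rotation that the learned RI-LBD addresses by first assigning each patch to a rotational binary pattern.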
Neural correlates of virtual route recognition in congenital blindness.
Kupers, Ron; Chebat, Daniel R; Madsen, Kristoffer H; Paulson, Olaf B; Ptito, Maurice
2010-07-13
Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a tactile-to-vision sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.
Profile of Executive and Memory Function Associated with Amphetamine and Opiate Dependence
Ersche, Karen D; Clark, Luke; London, Mervyn; Robbins, Trevor W; Sahakian, Barbara J
2007-01-01
Cognitive function was assessed in chronic drug users on neurocognitive measures of executive and memory function. Current amphetamine users were contrasted with current opiate users, and these two groups were compared with former users of these substances (abstinent for at least one year). Four groups of participants were recruited: amphetamine-dependent individuals, opiate-dependent individuals, former users of amphetamines and/or opiates, and healthy non-drug-taking controls. Participants were administered the Tower of London (TOL) planning task and the 3D-IDED attentional set-shifting task to assess executive function, and the Paired Associates Learning and Delayed Pattern Recognition Memory tasks to assess visual memory function. The three groups of substance users showed significant impairments on TOL planning, Pattern Recognition Memory, and Paired Associates Learning. Current amphetamine users displayed a greater degree of impairment than current opiate users. Consistent with previous research showing that healthy men perform better on visuo-spatial tests than women, our male controls remembered significantly more paired associates than their female counterparts. This relationship was reversed in drug users: while the performance of female drug users was normal, male drug users showed significant impairment compared to both their female counterparts and male controls. There was no difference in performance between current and former drug users. Neither years of drug abuse nor years of drug abstinence were associated with performance. Chronic drug users display pronounced neuropsychological impairment in the domains of executive and memory function. Impairment persists after several years of drug abstinence and may reflect neuropathology in frontal and temporal cortices. PMID:16160707
Profiles of cognitive dysfunction in chronic amphetamine and heroin abusers.
Ornstein, T J; Iddon, J L; Baldacchino, A M; Sahakian, B J; London, M; Everitt, B J; Robbins, T W
2000-08-01
Groups of subjects whose primary drug of abuse was amphetamine or heroin were compared, together with age- and IQ-matched control subjects. The study consisted of a neuropsychological test battery which included both conventional tests and computerised tests of recognition memory, spatial working memory, planning, sequence generation, visual discrimination learning, and attentional set-shifting. Many of these tests have previously been shown to be sensitive to cortical damage (including selective lesions of the temporal or frontal lobes) and to cognitive deficits in dementia, basal ganglia disease, and neuropsychiatric disorder. Qualitative differences, as well as some commonalities, were found in the profile of cognitive impairment between the two groups. The chronic amphetamine abusers were significantly impaired on the extra-dimensional shift task (a core component of the Wisconsin Card Sort Test), whereas the heroin abusers were impaired in learning the normally easier intra-dimensional shift component. Both groups were impaired on some of the tests of spatial working memory. However, the amphetamine group, unlike the heroin group, was not deficient on an index of strategic performance on this test. The heroin group failed to show significant improvement between two blocks of a sequence generation task after training and additionally exhibited more perseverative behavior on this task. The two groups were profoundly but equivalently impaired on a test of pattern recognition memory sensitive to temporal lobe dysfunction. These results indicate that chronic drug use may lead to distinct patterns of cognitive impairment that may be associated with dysfunction of different components of cortico-striatal circuitry.
Real-time simultaneous and proportional myoelectric control using intramuscular EMG
Kuiken, Todd A; Hargrove, Levi J
2014-01-01
Objective. Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach. We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main results. Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts' Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts' Law tasks with high levels of path efficiency. Significance. These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control. PMID:25394366
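The "throughput" figure quoted in the Fitts' Law test (up to a 25% increase) is conventionally computed as the target's index of difficulty (Shannon formulation) divided by movement time. A minimal sketch with made-up numbers, not values from the study:

```python
import math

# Fitts' Law throughput as commonly computed in target-acquisition tests
# (Shannon formulation). All numbers below are illustrative, not from the study.

def index_of_difficulty(distance, width):
    """Index of difficulty in bits: ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits per second: TP = ID / MT."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a target 7 units away and 1 unit wide, acquired in 1.5 s.
tp = throughput(7.0, 1.0, 1.5)   # ID = log2(8) = 3 bits -> 2 bits/s
```

Higher throughput means the subject conveys more "control information" per second, which is why it serves as a single summary metric across targets of different sizes and distances.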
Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R
2011-11-01
Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.
Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A
2002-01-01
The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. 
The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.
Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike
2016-02-01
Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed.
NASA Astrophysics Data System (ADS)
To, Cuong; Pham, Tuan D.
2010-01-01
In machine learning, pattern recognition may be the most popular task. Identifying "similar" patterns is also very important in biology: first, it is useful for predicting patterns associated with disease, for example in cancer tissue (normal or tumor); second, similarity or dissimilarity of kinetic patterns is used to identify coordinately controlled genes or proteins involved in the same regulatory process; third, similar genes (or proteins) share similar functions. In this paper, we present an algorithm that uses genetic programming to create a decision tree for binary classification problems. The algorithm was applied to five real biological databases. Based on comparisons with well-known methods, the algorithm stands out in most cases.
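The general recipe of evolving decision-tree structures against a fitness function can be caricatured as follows. This sketch uses made-up data, a mutation-only search, and arbitrary parameters; it illustrates the idea, not the authors' algorithm:

```python
import random

# Caricature of genetic programming over decision trees for binary
# classification: trees are nested tuples, fitness is training accuracy,
# and search proceeds by repeatedly proposing random trees (a mutation-only
# "GP" sketch; data and parameters are invented for illustration).

def predict(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    _, feat, thresh, left, right = tree
    return predict(left if x[feat] <= thresh else right, x)

def accuracy(tree, xs, ys):
    return sum(predict(tree, x) == y for x, y in zip(xs, ys)) / len(ys)

def random_tree(n_features, depth=2):
    if depth == 0 or random.random() < 0.3:
        return ("leaf", random.randint(0, 1))
    return ("node", random.randrange(n_features), random.uniform(0, 1),
            random_tree(n_features, depth - 1),
            random_tree(n_features, depth - 1))

def evolve(xs, ys, generations=300, seed=0):
    random.seed(seed)
    n_features = len(xs[0])
    best = random_tree(n_features)
    best_fit = accuracy(best, xs, ys)
    for _ in range(generations):
        child = random_tree(n_features)      # "mutation": a fresh candidate
        fit = accuracy(child, xs, ys)
        if fit >= best_fit:                  # keep the fitter tree
            best, best_fit = child, fit
    return best, best_fit

# Toy separable data: class is 1 when the first feature exceeds 0.5.
xs = [(0.1, 0.9), (0.2, 0.1), (0.8, 0.4), (0.9, 0.7), (0.4, 0.3), (0.7, 0.2)]
ys = [0, 0, 1, 1, 0, 1]
tree, fit = evolve(xs, ys)
```

A full GP system would additionally use a population with crossover and subtree mutation, but the tree representation and accuracy-based fitness are the same.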
Task difficulty moderates the revelation effect.
Aßfalg, André; Currie, Devon; Bernstein, Daniel M
2017-05-01
Tasks that precede a recognition probe induce a more liberal response criterion than do probes without tasks: the "revelation effect." For example, participants are more likely to claim that a stimulus is familiar directly after solving an anagram, relative to a condition without an anagram. Revelation effect hypotheses disagree on whether hard preceding tasks should produce a larger revelation effect than easy preceding tasks. Although some studies have shown that hard tasks increase the revelation effect compared to easy tasks, these studies suffered from a confound of task difficulty and task presence. Conversely, other studies have shown that the revelation effect is independent of task difficulty. In the present study, we used new task difficulty manipulations to test whether hard tasks produce larger revelation effects than easy tasks. Participants (N = 464) completed hard or easy preceding tasks, including anagrams (Exps. 1 and 2) and the typing of specific arrow key sequences (Exps. 3-6). With sample sizes typical of revelation effect experiments, the effect sizes of task difficulty on the revelation effect varied considerably across experiments. Despite this variability, a consistent data pattern emerged: hard tasks produced larger revelation effects than easy tasks. Although the present study falsifies certain revelation effect hypotheses, the general vagueness of revelation effect hypotheses remains.
Clustered Multi-Task Learning for Automatic Radar Target Recognition
Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua
2017-01-01
Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share the multi-task relationships for radar target recognition. To make full use of these relationships, the latent multi-task relationships in the projection space are also taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected space. In view of the nonlinear characteristics of radar targets, the proposed method is extended to a nonlinear kernel version and the corresponding nonlinear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the MSTAR SAR public database verify the superiority of the proposed method over related algorithms. PMID:28953267
When anger dominates the mind: Increased motor corticospinal excitability in the face of threat
Hortensius, Ruud
2016-01-01
Threat demands fast and adaptive reactions that are manifested at the physiological, behavioral, and phenomenological level and are responsive to the direction of the threat and its severity for the individual. Here, we investigated the effects of threat directed toward or away from the observer on motor corticospinal excitability and explicit recognition. Sixteen healthy right-handed volunteers completed a transcranial magnetic stimulation (TMS) task and a separate three-alternative forced-choice emotion recognition task. Single-pulse TMS to the left primary motor cortex was applied to measure motor evoked potentials from the right abductor pollicis brevis in response to dynamic angry, fearful, and neutral bodily expressions with blurred faces directed toward or away from the observer. Results showed that motor corticospinal excitability increased in response to anger, independently of its direction, compared with fear and neutral expressions. In contrast, anger was better recognized when directed toward the observer than when directed away, while the opposite pattern was found for fear. The present results provide evidence for differential effects of threat direction on explicit recognition and motor corticospinal excitability. In the face of threat, motor corticospinal excitability increases independently of the direction of anger, indicative of the importance of more automatic reactions to threat. PMID:27325519
Accelerometry-Based Activity Recognition and Assessment in Rheumatic and Musculoskeletal Diseases.
Billiet, Lieven; Swinnen, Thijs Willem; Westhovens, Rene; de Vlam, Kurt; Van Huffel, Sabine
2016-12-16
One of the important aspects to be considered in rheumatic and musculoskeletal diseases is the patient's activity capacity (or performance), defined as the ability to perform a task. Currently, it is assessed by physicians or health professionals mainly by means of a patient-reported questionnaire, sometimes combined with the therapist's judgment on performance-based tasks. This work introduces an approach to assess the activity capacity at home in a more objective, yet interpretable way. It offers a pilot study on 28 patients suffering from axial spondyloarthritis (axSpA) to demonstrate its efficacy. Firstly, a protocol is introduced to recognize a limited set of six transition activities in the home environment using a single accelerometer. To this end, a hierarchical classifier with the rejection of non-informative activity segments has been developed drawing on both direct pattern recognition and statistical signal features. Secondly, the recognized activities should be assessed, similarly to the scoring performed by patients themselves. This is achieved through the interval coded scoring (ICS) system, a novel method to extract an interpretable scoring system from data. The activity recognition reaches an average accuracy of 93.5%; assessment is currently 64.3% accurate. These results indicate the potential of the approach; a next step should be its validation in a larger patient study.
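The pipeline described (windowed accelerometer signal, statistical features, classification with rejection of non-informative segments) might look like the following generic sketch. The features, thresholds, labels, and data here are illustrative assumptions, not the authors' protocol or the ICS system:

```python
import math

# Generic sketch of accelerometry-based activity recognition: statistical
# features per window plus a nearest-centroid classifier with rejection of
# uninformative (near-static) segments. Features, thresholds, and data are
# illustrative assumptions, not the protocol from the study.

def features(window):
    """Mean, standard deviation, and mean absolute step change of one window."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in window) / n)
    step = sum(abs(b - a) for a, b in zip(window, window[1:])) / (n - 1)
    return (mean, std, step)

def classify(window, centroids, reject_std=0.05):
    """Return the closest activity label, or None for near-static segments."""
    f = features(window)
    if f[1] < reject_std:                    # rejection: almost no movement
        return None
    return min(centroids,
               key=lambda label: math.dist(f, centroids[label]))

# Toy centroids "learned offline" (hypothetical values and labels).
centroids = {"sit-to-stand": (1.2, 0.6, 0.4), "walking": (1.0, 0.3, 0.15)}

still = [1.0] * 20                             # flat signal -> rejected
active = [1.0, 1.8, 0.6, 1.9, 0.5, 1.7, 0.7, 1.8, 0.6, 1.6] * 2
```

The rejection step mirrors the paper's discarding of non-informative activity segments before the recognized transitions are passed on for assessment.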
Duval, Elizabeth R; Garfinkel, Sarah N; Swain, James E; Evans, Gary W; Blackburn, Erika K; Angstadt, Mike; Sripada, Chandra S; Liberzon, Israel
2017-02-01
Childhood poverty is a risk factor for poorer cognitive performance during childhood and adulthood. While evidence linking childhood poverty and memory deficits in adulthood has been accumulating, underlying neural mechanisms are unknown. To investigate neurobiological links between childhood poverty and adult memory performance, we used functional magnetic resonance imaging (fMRI) during a visuospatial memory task in healthy young adults with varying income levels during childhood. Participants were assessed at age 9 and followed through young adulthood to assess income and related factors. During adulthood, participants completed a visuospatial memory task while undergoing MRI scanning. Patterns of neural activation, as well as memory recognition for items, were assessed to examine links between brain function and memory performance as it relates to childhood income. Our findings revealed associations between item recognition, childhood income level, and hippocampal activation. Specifically, the association between hippocampal activation and recognition accuracy varied as a function of childhood poverty, with positive associations at higher income levels, and negative associations at lower income levels. These prospective findings confirm previous retrospective results detailing deleterious effects of childhood poverty on adult memory performance. In addition, for the first time, we identify novel neurophysiological correlates of these deficits localized to hippocampus activation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Sources of Interference in Recognition Testing
ERIC Educational Resources Information Center
Annis, Jeffrey; Malmberg, Kenneth J.; Criss, Amy H.; Shiffrin, Richard M.
2013-01-01
Recognition memory accuracy is harmed by prior testing (a.k.a., output interference [OI]; Tulving & Arbuckle, 1966). In several experiments, we interpolated various tasks between recognition test trials. The stimuli and the tasks were more similar (lexical decision [LD] of words and nonwords) or less similar (gender identification of male and…
Image analysis in cytology: DNA-histogramming versus cervical smear prescreening.
Bengtsson, E W; Nordin, B
1993-01-01
The visual inspection of cellular specimens and histological sections through a light microscope plays an important role in clinical medicine and biomedical research. The human visual system is very good at the recognition of various patterns but less efficient at quantitative assessment of these patterns. Some samples are prepared in great numbers, most notably the screening for cervical cancer, the so-called PAP-smears, which results in hundreds of millions of samples each year, creating a tedious mass inspection task. Numerous attempts have been made over the last 40 years to create systems that solve these two tasks, the quantitative supplement to the human visual system and the automation of mass screening. The most difficult task, the total automation, has received the greatest attention with many large scale projects over the decades. In spite of all these efforts, still no generally accepted automated prescreening device exists on the market. The main reason for this failure is the great pattern recognition capabilities needed to distinguish between cancer cells and all other kinds of objects found in the specimens: cellular clusters, debris, degenerate cells, etc. Improved algorithms, the ever-increasing processing power of computers and progress in biochemical specimen preparation techniques make it likely that eventually useful automated prescreening systems will become available. Meanwhile, much less effort has been put into the development of interactive cell image analysis systems. Still, some such systems have been developed and put into use at thousands of laboratories worldwide. In these the human pattern recognition capability is used to select the fields and objects that are to be analysed while the computational power of the computer is used for the quantitative analysis of cellular DNA content or other relevant markers. 
Numerous studies have shown that the quantitative information about the distribution of cellular DNA content is of prognostic significance in many types of cancer. Several laboratories are therefore putting these techniques into routine clinical use. The more advanced systems can also study many other markers and cellular features, some known to be of clinical interest, others useful in research. The advances in computer technology are making these systems more generally available through decreasing cost, increasing computational power and improved user interfaces. We have been involved in research and development of both automated and interactive cell analysis systems during the last 20 years. Here some experiences and conclusions from this work will be presented as well as some predictions about what can be expected in the near future.
Chatterjee, Monita; Peng, Shu-Chen
2008-01-01
Fundamental frequency (F0) processing by cochlear implant (CI) listeners was measured using a psychophysical task and a speech intonation recognition task. Listeners’ Weber fractions for modulation frequency discrimination were measured using an adaptive, 3-interval, forced-choice paradigm: stimuli were presented through a custom research interface. In the speech intonation recognition task, listeners were asked to indicate whether resynthesized bisyllabic words, when presented in the free field through the listeners’ everyday speech processor, were question-like or statement-like. The resynthesized tokens were systematically manipulated to have different initial F0s to represent male vs. female voices, and different F0 contours (i.e., falling, flat, and rising). Although the CI listeners showed considerable variation in performance on both tasks, significant correlations were observed between the CI listeners’ sensitivity to modulation frequency in the psychophysical task and their performance in intonation recognition. Consistent with their greater reliance on temporal cues, the CI listeners’ performance in the intonation recognition task was significantly poorer with the higher initial-F0 stimuli than with the lower initial-F0 stimuli. Similar results were obtained with normal hearing listeners attending to noiseband-vocoded CI simulations with reduced spectral resolution. PMID:18093766
Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin
2015-09-01
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs. © 2014 The British Psychological Society.
Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders
Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat
2012-01-01
Among the main characteristics of Autism Spectrum Disorder (ASD) are problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye-movement analyses indicated that ASD participants generally produced more saccades and shorter fixation durations than the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227
Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan
2015-01-01
To study the involvement of the anterior nuclei of the thalamus (ANT) as compared to the involvement of the hippocampus in the processes of encoding and recognition during visual and verbal memory tasks. We studied intracerebral recordings in patients with pharmacoresistent epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. P300-like potentials were recorded in the hippocampus by visual and verbal memory encoding and recognition tasks and in the ANT by the visual encoding and visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures.
Learning and recognition of tactile temporal sequences by mice and humans
Bale, Michael R; Bitzidou, Malamati; Pitas, Anna; Brebner, Leonie S; Khazim, Lina; Anagnou, Stavros T; Stevenson, Caitlin D; Maravall, Miguel
2017-01-01
The world around us is replete with stimuli that unfold over time. When we hear an auditory stream like music or speech or scan a texture with our fingertip, physical features in the stimulus are concatenated in a particular order. This temporal patterning is critical to interpreting the stimulus. To explore the capacity of mice and humans to learn tactile sequences, we developed a task in which subjects had to recognise a continuous modulated noise sequence delivered to whiskers or fingertips, defined by its temporal patterning over hundreds of milliseconds. GO and NO-GO sequences differed only in that the order of their constituent noise modulation segments was temporally scrambled. Both mice and humans efficiently learned tactile sequences. Mouse sequence recognition depended on detecting transitions in noise amplitude; animals could base their decision on the earliest information available. Humans appeared to use additional cues, including the duration of noise modulation segments. DOI: http://dx.doi.org/10.7554/eLife.27333.001 PMID:28812976
NASA Astrophysics Data System (ADS)
Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup
2016-06-01
Application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, owing to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only half a cycle of post-fault data from the three phase currents to accomplish the task. The change in current signal features during a fault is used as the discriminatory measure. The developed scheme is tested over a large set of fault data with variation in system and fault parameters. These fault cases were generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm proved well suited to implementation on a TCSC-compensated line, with improved accuracy and speed.
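As a rough illustration of the approach above (features extracted from half a cycle of post-fault phase currents, fed to a support vector machine), the sketch below trains a minimal linear SVM with a Pegasos-style sub-gradient loop on synthetic per-phase RMS features. The fault types, the feature choice, and the training loop are assumptions for illustration, not the paper's actual feature set or SVM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def half_cycle_features(i_abc):
    """Per-phase RMS over a half cycle of the three phase currents --
    a stand-in for the change-based features used in the paper."""
    return np.sqrt(np.mean(i_abc ** 2, axis=1))

def make_case(kind):
    """Synthetic post-fault currents: kind 0 = phase-A-to-ground (I_a rises),
    kind 1 = phase-A-to-B (I_a and I_b rise).  Entirely illustrative."""
    i_abc = rng.normal(1.0, 0.05, (3, 40))
    i_abc[: 1 + kind] *= 5.0
    return half_cycle_features(i_abc)

X = np.array([make_case(k % 2) for k in range(200)])
y = np.array([1.0 if k % 2 else -1.0 for k in range(200)])

# linear SVM trained with a Pegasos-style stochastic sub-gradient loop
w, lam = np.zeros(3), 0.01
for t in range(1, 2001):
    i = rng.integers(len(X))
    eta = 1.0 / (lam * t)
    if y[i] * (X[i] @ w) < 1:               # margin violated: hinge-loss step
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                                   # only the regularizer shrinks w
        w = (1 - eta * lam) * w

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

The actual scheme discriminates among the full set of fault types (a multi-class problem) from only half a cycle of data; a production implementation would use an established SVM library rather than this toy loop.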
High resolution ultrasonic spectroscopy system for nondestructive evaluation
NASA Technical Reports Server (NTRS)
Chen, C. H.
1991-01-01
With increased demand for high resolution ultrasonic evaluation, computer based systems or work stations become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The basic system which was completed consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.
NASA Astrophysics Data System (ADS)
Maas, Christian; Schmalzl, Jörg
2013-08-01
Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In recent years, several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real-time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on ordinary field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input to the learning algorithm.
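The second stage described above, a simple Hough transform for hyperbolas, can be sketched as a brute-force vote over candidate apex positions. The travel-time model and the parameter ranges below are standard textbook assumptions (known, constant wave velocity), not details taken from the paper:

```python
import numpy as np

def hough_hyperbola(points, x0_range, t0_range, v=0.1, tol=1.0):
    """Vote for the apex (x0, t0) of a reflection hyperbola
    t(x) = sqrt(t0^2 + (2*(x - x0)/v)^2), with wave velocity v assumed
    known here (the full method also estimates v from the hyperbola)."""
    acc = np.zeros((len(x0_range), len(t0_range)))
    for x, t in points:
        for i, x0 in enumerate(x0_range):
            for j, t0 in enumerate(t0_range):
                if abs(np.hypot(t0, 2 * (x - x0) / v) - t) < tol:
                    acc[i, j] += 1          # this (x0, t0) explains the pick
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return float(x0_range[i]), float(t0_range[j])

# usage: picks synthesized on a hyperbola with apex (5.0, 20.0) recover it
xs = np.linspace(0, 10, 21)
picks = [(x, np.hypot(20.0, 2 * (x - 5.0) / 0.1)) for x in xs]
print(hough_hyperbola(picks, np.linspace(0, 10, 21), np.linspace(10, 30, 21)))
# -> (5.0, 20.0)
```

Because the Viola-Jones detector has already narrowed the search to small image regions, this brute-force accumulation over a modest parameter grid stays affordable on field hardware.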
NASA Technical Reports Server (NTRS)
Moebes, T. A.
1994-01-01
To locate the accessory pathway(s) in pre-excitation syndromes, epicardial and endocardial ventricular mapping is performed during anterograde ventricular activation via accessory pathway(s), from data originally received in signal form. As the number of channels increases, more automated detection of coherent/incoherent signals is needed, as well as prediction and prognosis of ventricular tachycardia (VT). Today's computers and computer program algorithms are not good at simple perceptual tasks such as recognizing a pattern or identifying a sound. This discrepancy, among other things, has been a major motivating factor in developing brain-based, massively parallel computing architectures. Neural net paradigms have proven to be effective at pattern recognition tasks. In signal processing, the picking of coherent/incoherent signals represents a pattern recognition task for computer systems, as does the picking of signals representing the onset of VT. We attacked this problem by defining four signal attributes for each potential first maximal arrival peak and one signal attribute over the entire signal as input to a back-propagation neural network. One attribute was the predicted amplitude value after the maximum amplitude over a data window. Then, by using a set of known (user-selected) coherent/incoherent signals, and signals representing the onset of VT, we trained the back-propagation network to recognize coherent/incoherent signals and signals indicating the onset of VT. Since our output scheme involves a true or false decision, and since the output unit computes values between 0 and 1, we used a Fuzzy Arithmetic approach to classify data as coherent/incoherent signals. Furthermore, a Mean-Square Error Analysis was used to determine system stability. The neural-net-based picking system achieved high accuracy on picking coherent/incoherent signals in different patients.
The system also achieved high accuracy in picking signals which represent the onset of VT, that is, signals immediately followed by VT. A special binary representation of the input and output data allowed the neural network to train very rapidly compared to standard decimal or normalized representations of the data.
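A minimal version of such a network (binary inputs, one hidden layer trained by back-propagation, and a fuzzy reading of the single 0-to-1 output unit) might look like the sketch below. The signal attributes and the "coherence" rule are invented stand-ins for the paper's measured peak attributes, used only so the toy network has something to learn:

```python
import numpy as np

rng = np.random.default_rng(1)

# 4 binary attributes per signal (stand-ins for the paper's peak attributes);
# a signal is called "coherent" here when at least 2 attributes are active --
# an invented rule, not the clinical criterion.
X = np.array([[a, b, c, d] for a in (0, 1) for b in (0, 1)
              for c in (0, 1) for d in (0, 1)], dtype=float)
y = (X.sum(axis=1) >= 2).astype(float).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0.0, 1.0, (4, 8))       # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (8, 1))       # hidden -> output weights

for _ in range(20000):                  # plain batch back-propagation, MSE loss
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)     # chain rule through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)      # ...and through the hidden layer
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

def classify(p):
    """Fuzzy-style reading of the 0..1 output unit: only confident values
    yield a crisp decision, loosely mirroring the fuzzy-arithmetic step."""
    return "coherent" if p > 0.8 else ("incoherent" if p < 0.2 else "uncertain")

out = sigmoid(sigmoid(X @ W1) @ W2)
print("training accuracy:", float(np.mean((out > 0.5) == y)))
```

Note the binary input/output representation, which the abstract credits with markedly faster training than decimal or normalized encodings.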
Czerniawski, Jennifer; Miyashita, Teiko; Lewandowski, Gail; Guzowski, John F.
2014-01-01
Neuroinflammation is implicated in impairments in neuronal function and cognition that arise with aging, trauma, and/or disease. Therefore, understanding the underlying basis of the effect of immune system activation on neural function could lead to therapies for treating cognitive decline. Although neuroinflammation is widely thought to preferentially impair hippocampus-dependent memory, data on the effects of cytokines on cognition are mixed. One possible explanation for these inconsistent results is that cytokines may disrupt specific neural processes underlying some forms of memory but not others. In an earlier study, we tested the effect of systemic administration of bacterial lipopolysaccharide (LPS) on retrieval of hippocampus-dependent context memory and neural circuit function in CA3 and CA1 (Czerniawski and Guzowski, 2014). Paralleling impairment in context discrimination memory, we observed changes in neural circuit function consistent with disrupted pattern separation function. In the current study we tested the hypothesis that acute neuroinflammation selectively disrupts memory retrieval in tasks requiring hippocampal pattern separation processes. Male Sprague-Dawley rats given LPS systemically prior to testing exhibited intact performance in tasks that do not require hippocampal pattern separation processes: novel object recognition and spatial memory in the water maze. By contrast, memory retrieval in a task thought to require hippocampal pattern separation, context-object discrimination, was strongly impaired in LPS-treated rats in the absence of any gross effects on exploratory activity or motivation. These data show that LPS administration does not impair memory retrieval in all hippocampus-dependent tasks, and support the hypothesis that acute neuroinflammation impairs context discrimination memory via disruption of pattern separation processes in hippocampus. PMID:25451612
Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.
Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto
2015-08-06
Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.
Using eye movements as an index of implicit face recognition in autism spectrum disorder.
Hedley, Darren; Young, Robyn; Brewer, Neil
2012-10-01
Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.
Oldehinkel, Albertine J; Hartman, Catharina A; Van Oort, Floor V A; Nederhof, Esther
2015-01-01
Background Some adolescents function poorly in apparently benign environments, while others thrive despite hassles and difficulties. The aim of this study was to examine if adolescents with specialized skills in the recognition of either positive or negative emotions have a context-dependent risk of developing an anxiety or depressive disorder during adolescence, depending on exposure to positive or harsh parenting. Methods Data came from a large prospective Dutch population study (N = 1539). At age 11, perceived parental rejection and emotional warmth were measured by questionnaire, and emotion recognition skills by means of a reaction-time task. Lifetime diagnoses of anxiety and depressive disorders were assessed at about age 19, using a standardized diagnostic interview. Results Adolescents who were specialized in the recognition of positive emotions had a relatively high probability to develop an anxiety disorder when exposed to parental rejection (Bspecialization*rejection = 0.23, P < 0.01) and a relatively low probability in response to parental emotional warmth (Bspecialization*warmth = −0.24, P = 0.01), while the opposite pattern was found for specialists in negative emotions. The effect of parental emotional warmth on depression onset was likewise modified by emotion recognition specialization (B = −0.13, P = 0.03), but the effect of parental rejection was not (B = 0.02, P = 0.72). In general, the relative advantage of specialists in negative emotions was restricted to fairly uncommon negative conditions. Conclusions Our results suggest that there is no unequivocal relation between parenting behaviors and the probability to develop an anxiety or depressive disorder in adolescence, and that emotion recognition specialization may be a promising way to distinguish between various types of context-dependent reaction patterns. PMID:25642389
Recall and recognition of verbal paired associates in early Alzheimer's disease.
Lowndes, G J; Saling, M M; Ames, D; Chiu, E; Gonzalez, L M; Savage, G R
2008-07-01
The primary impairment in early Alzheimer's disease (AD) is encoding/consolidation, resulting from medial temporal lobe (MTL) pathology. AD patients perform poorly on cued-recall paired associate learning (PAL) tasks, which assess the ability of the MTLs to encode relational memory. Since encoding and retrieval processes are confounded within performance indexes on cued-recall PAL, its specificity for AD is limited. Recognition paradigms tend to show good specificity for AD, and are well tolerated, but are typically less sensitive than recall tasks. Associate-recognition is a novel PAL task requiring a combination of recall and recognition processes. We administered a verbal associate-recognition test and cued-recall analogue to 22 early AD patients and 55 elderly controls to compare their ability to discriminate these groups. Both paradigms used eight arbitrarily related word pairs (e.g., pool-teeth) with varying degrees of imageability. Associate-recognition was as effective as the cued-recall analogue in discriminating the groups, and logistic regression demonstrated that classification rates for the two tasks were equivalent. These preliminary findings provide support for the clinical value of this recognition tool. Conceptually, it has potential for greater specificity in informing neuropsychological diagnosis of AD in clinical samples, but this requires further empirical support.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
Bizot, Jean-Charles; Herpin, Alexandre; Pothion, Stéphanie; Pirot, Sylvain; Trovero, Fabrice; Ollat, Hélène
2005-07-01
The effect of chronic sulbutiamine treatment on memory was studied in rats with a spatial delayed-non-match-to-sample (DNMTS) task in a radial maze and a two-trial object recognition task. After completion of training in the DNMTS task, animals were subjected for 9 weeks to daily injections of either saline or sulbutiamine (12.5 or 25 mg/kg). Sulbutiamine did not modify memory in the DNMTS task but improved it in the object recognition task. Dizocilpine impaired both acquisition and retention of the DNMTS task in the saline-treated group, but not in the two sulbutiamine-treated groups, suggesting that sulbutiamine may counteract the amnesia induced by a blockade of the N-methyl-D-aspartate glutamate receptors. Taken together, these results are in favor of a beneficial effect of sulbutiamine on working and episodic memory.
Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia
2015-10-01
Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing, inhibition and regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition, along with structural magnetic resonance imaging data, served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects, with a sensitivity and specificity of 77.78%, based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention, such as the medial and dorsolateral frontal areas, reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly, despite pronounced univariate group differences in inhibition-related activation and grey matter volumes, the corresponding classifiers failed or yielded only poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites.
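The accuracy, sensitivity and specificity figures reported in such classification studies are simple functions of the binary confusion matrix. A minimal, illustrative sketch (not the authors' SVM pipeline; the labels and predictions are hypothetical):

```python
def confusion_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity and specificity for binary labels.

    Sensitivity is recall on the positive class (e.g. patients);
    specificity is recall on the negative class (e.g. controls).
    """
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

For example, with nine patients and nine controls, seven correct decisions in each group reproduce the 77.78% accuracy, sensitivity and specificity quoted above.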
Social Recognition Memory Requires Two Stages of Protein Synthesis in Mice
ERIC Educational Resources Information Center
Wolf, Gerald; Engelmann, Mario; Richter, Karin
2005-01-01
Olfactory recognition memory was tested in adult male mice using a social discrimination task. The testing was conducted to begin to characterize the role of protein synthesis and the specific brain regions associated with activity in this task. Long-term olfactory recognition memory was blocked when the protein synthesis inhibitor anisomycin was…
Lobb, M L; Stern, J A
1986-08-01
Sequential patterns of eye and eyelid motion were identified in seven subjects performing a modified serial probe recognition task under drowsy conditions. Using simultaneous EOG and video recordings, eyelid motion was divided into components above, within, and below the pupil, and the durations in sequence were recorded. A serial probe recognition task was modified to allow for distinguishing decision errors from attention errors. Decision errors were found to be more frequent following a downward shift in the gaze angle, in which the eyelid closing sequence was reduced from a five-element to a three-element sequence. The velocity of the eyelid moving over the pupil during decision errors was slow in the closing and fast in the reopening phase, while on decision-correct trials it was fast in closing and slower in reopening. Due to the high variability of eyelid motion under drowsy conditions these findings were only marginally significant. When a five-element blink occurred, the velocity of the lid-over-pupil motion component of these endogenous eye blinks was significantly faster on decision-correct than on decision-error trials. Furthermore, the highly variable, long-duration closings associated with the decision response produced slow eye movements in the horizontal plane (SEM), which were more frequent and significantly longer in duration on decision-error versus decision-correct responses.
Age-related differences in listening effort during degraded speech recognition
Ward, Kristina M.; Shen, Jing; Souza, Pamela E.; Grieco-Calub, Tina M.
2016-01-01
Objectives The purpose of the current study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design Twenty-five younger adults (18–24 years) and twenty-one older adults (56–82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants’ responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners’ performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (baseline vs. dual task); and (3) per group (younger vs. older adults). Results Speech recognition declined with increasing spectral degradation for both younger and older adults when they performed the task in isolation or concurrently with the visual monitoring task. Older adults were slower and less accurate than younger adults on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared to single-task performance, older adults experienced greater declines in secondary-task accuracy, but not reaction time, than younger adults. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. 
Conclusions Older adults experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than younger adults. These findings are interpreted as suggesting that older listeners expended greater listening effort than younger listeners, and may be partially attributed to age-related differences in executive control. PMID:27556526
Within-person adaptivity in frugal judgments from memory.
Filevich, Elisa; Horn, Sebastian S; Kühn, Simone
2017-12-22
Humans can exploit recognition memory as a simple cue for judgment. The utility of recognition depends on the interplay with the environment, particularly on its predictive power (validity) in a domain. It is, therefore, an important question whether people are sensitive to differences in recognition validity between domains. Strategic, intra-individual changes in the reliance on recognition have not been investigated so far. The present study fills this gap by scrutinizing within-person changes in using a frugal strategy, the recognition heuristic (RH), across two task domains that differed in recognition validity. The results showed adaptive changes in the reliance on recognition between domains. However, these changes were neither associated with the individual recognition validities nor with corresponding changes in these validities. These findings support a domain-adaptivity explanation, suggesting that people have broader intuitions about the usefulness of recognition across different domains that are nonetheless sufficiently robust for adaptive decision making. The analysis of metacognitive confidence reports mirrored and extended these results. Like RH use, confidence ratings covaried with task domain, but not with individual recognition validities. The changes in confidence suggest that people may have metacognitive access to information about global differences between task domains, but not to individual cue validities.
Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas
2015-01-01
The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, provides one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text. Therefore, these variations are not included in image databases and in experimental evaluations. This research uncovers challenges faced by Arabic cursive script recognition in a holistic framework by considering Pashto as a test case, because the Pashto language has a larger alphabet set than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures with scaling, orientation and location variations is introduced. In this article, a feature space based on the scale invariant feature transform (SIFT), along with a segmentation framework, is proposed for overcoming the above-mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA). PMID:26368566
Rubin, Leah H.; Wu, Minjie; Sundermann, Erin E.; Meyer, Vanessa J.; Smith, Rachael; Weber, Kathleen M.; Cohen, Mardge H.; Little, Deborah M.; Maki, Pauline M.
2016-01-01
HIV-infected women may be particularly vulnerable to verbal learning and memory deficits. One factor contributing to these deficits is high perceived stress, which is associated with prefrontal cortical (PFC) atrophy and memory outcomes sensitive to PFC function, including retrieval and semantic clustering. We examined the association between stress and PFC activation during a verbal memory task in 36 HIV-infected women from the Chicago Consortium of the Women’s Interagency HIV Study (WIHS) to better understand the role of the PFC in this stress-related impairment. Participants completed standardized measures of verbal learning and memory and stress (Perceived Stress Scale-10). We used functional magnetic resonance imaging to assess brain function while participants completed encoding and recognition phases of a verbal memory task. HIV-infected women with higher stress (scores in top tertile) performed worse on all verbal memory outcomes including strategic encoding (p’s<0.05) compared to HIV-infected women with lower stress (scores in lower two tertiles). Patterns of brain activation during recognition (but not encoding) differed between women with higher versus lower stress. During recognition, women with higher stress demonstrated greater deactivation in medial PFC and posterior cingulate cortex compared to women with lower stress (p’s<0.05). Greater deactivation in medial PFC marginally related to less efficient strategic retrieval (p=0.06). Similar results were found in analyses focusing on PTSD symptoms. Results suggest that stress might alter the function of the medial PFC in HIV-infected women resulting in less efficient strategic retrieval and deficits in verbal memory. PMID:27094924
Implicit phonological priming during visual word recognition.
Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C
2011-03-15
Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
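In Itti and Baldi's framework, the "surprise" carried by an image is quantified as the Kullback-Leibler divergence between an observer's prior and posterior beliefs after seeing it. A toy sketch of that quantity for discrete distributions, purely for illustration (the study itself computes it over low-level image features):

```python
import math

def surprise_bits(posterior, prior):
    # KL divergence D(posterior || prior) in bits for two discrete
    # probability distributions given as equal-length sequences.
    # Zero-probability posterior terms contribute nothing.
    return sum(q * math.log2(q / p)
               for q, p in zip(posterior, prior) if q > 0)
```

A stimulus that leaves beliefs unchanged carries zero surprise; one that collapses a uniform belief over two hypotheses onto a single hypothesis carries one bit.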
Spiegel, S; Chiu, A; James, A S; Jentsch, J D; Karlsgodt, K H
2015-11-01
Numerous studies have implicated DTNBP1, the gene encoding dystrobrevin-binding protein or dysbindin, as a candidate risk gene for schizophrenia, though this relationship remains somewhat controversial. Variation in dysbindin, and its location on chromosome 6p, has been associated with cognitive processes, including those relying on a complex system of glutamatergic and dopaminergic interactions. Dysbindin is one of the seven protein subunits that comprise the biogenesis of lysosome-related organelles complex 1 (BLOC-1). Dysbindin protein levels are lower in mice with null mutations in pallidin, another gene in the BLOC-1, and pallidin levels are lower in mice with null mutations in the dysbindin gene, suggesting that multiple subunit proteins must be present to form a functional oligomeric complex. Furthermore, pallidin and dysbindin have similar distribution patterns in mouse and human brain. Here, we investigated whether the apparent correspondence of pallidin and dysbindin at the level of gene expression is also found at the level of behavior. Hypothesizing that mutations leading to underexpression of either of these proteins should have similar phenotypic effects, we studied recognition memory in both strains using the novel object recognition task (NORT) and social novelty recognition task (SNRT). We found that mice with a null mutation in either gene are impaired on SNRT and NORT when compared with wild-type controls. These results support the conclusion that deficits consistent with recognition memory impairment, a cognitive function that is impaired in schizophrenia, result from either pallidin or dysbindin mutations, possibly through degradation of BLOC-1 expression and/or function. © 2015 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
Mitchnick, Krista A; Wideman, Cassidy E; Huff, Andrew E; Palmer, Daniel; McNaughton, Bruce L; Winters, Boyer D
2018-05-15
The capacity to recognize objects from different view-points or angles, referred to as view-invariance, is an essential process that humans engage in daily. Currently, the ability to investigate the neurobiological underpinnings of this phenomenon is limited, as few ethologically valid view-invariant object recognition tasks exist for rodents. Here, we report two complementary, novel view-invariant object recognition tasks in which rodents physically interact with three-dimensional objects. Prior to experimentation, rats and mice were given extensive experience with a set of 'pre-exposure' objects. In a variant of the spontaneous object recognition task, novelty preference for pre-exposed or new objects was assessed at various angles of rotation (45°, 90° or 180°); unlike control rodents, for whom the objects were novel, rats and mice tested with pre-exposed objects did not discriminate between rotated and un-rotated objects in the choice phase, indicating substantial view-invariant object recognition. Secondly, using automated operant touchscreen chambers, rats were tested on pre-exposed or novel objects in a pairwise discrimination task, where the rewarded stimulus (S+) was rotated (180°) once rats had reached acquisition criterion; rats tested with pre-exposed objects re-acquired the pairwise discrimination following S+ rotation more effectively than those tested with new objects. Systemic scopolamine impaired performance on both tasks, suggesting involvement of acetylcholine at muscarinic receptors in view-invariant object processing. These tasks present novel means of studying the behavioral and neural bases of view-invariant object recognition in rodents. Copyright © 2018 Elsevier B.V. All rights reserved.
Representation of 3-Dimensional Objects by the Rat Perirhinal Cortex
Burke, S.N.; Maurer, A.P.; Hartzell, A.L.; Nematollahi, S.; Uprety, A.; Wallace, J.L.; Barnes, C.A.
2012-01-01
The perirhinal cortex (PRC) is known to play an important role in object recognition. Little is known, however, regarding the activity of PRC neurons during the presentation of stimuli that are commonly used for recognition memory tasks in rodents, that is, 3-dimensional objects. Rats in the present study were exposed to 3-dimensional objects while they traversed a circular track for food reward. Under some behavioral conditions the track contained novel objects, familiar objects, or no objects. Approximately 38% of PRC neurons demonstrated ‘object fields’ (a selective increase in firing at the location of one or more objects). Although the rats spent more time exploring the objects when they were novel compared to familiar, indicating successful recognition memory, the proportion of object fields and the firing rates of PRC neurons were not affected by the rats’ previous experience with the objects. Together these data indicate that the activity of PRC cells is powerfully affected by the presence of objects while animals navigate through an environment, but under these conditions, the firing patterns are not altered by the relative novelty of objects during successful object recognition. PMID:22987680
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Lexical leverage: Category knowledge boosts real-time novel word recognition in two-year- olds
Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.
2016-01-01
Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. PMID:26452444
EEG based topography analysis in string recognition task
NASA Astrophysics Data System (ADS)
Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao
2017-03-01
Vision perception and recognition is a complex process, during which different parts of brain are involved depending on the specific modality of the vision target, e.g. face, character, or word. In this study, brain activities in string recognition task compared with idle control state are analyzed through topographies based on multiple measurements, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in string recognition task are significantly higher than those in idle state, especially at locations of P4, O2, T6 and C4. It implies that these regions are highly involved in string recognition task. Since symbolic sample entropy measures complexity, from the perspective of new information generation, and normalized rhythm power reveals the power distributions in frequency domain, complementary information about the underlying dynamics can be provided through the two types of indices.
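Sample entropy, one of the measures used here, quantifies irregularity as the negative log of the conditional probability that signal segments similar over m samples remain similar over m + 1. A minimal numpy sketch of a common simplified variant (not the authors' implementation; the embedding dimension m and tolerance r follow the usual defaults):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts template
    pairs of length m within Chebyshev tolerance r, and A counts the same
    for length m + 1. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # common default tolerance
    def matches(k):
        t = np.array([x[i:i + k] for i in range(len(x) - k + 1)])
        count = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= r))
        return count
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

A strictly periodic signal yields near-zero entropy while white noise yields a much larger value, which is the kind of contrast the task-versus-idle topographies above exploit.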
Repetition and brain potentials when recognizing natural scenes: task and emotion differences
Bradley, Margaret M.; Codispoti, Maurizio; Karlsson, Marie; Lang, Peter J.
2013-01-01
Repetition has long been known to facilitate memory performance, but its effects on event-related potentials (ERPs), measured as an index of recognition memory, are less well characterized. In Experiment 1, effects of both massed and distributed repetition on old–new ERPs were assessed during an immediate recognition test that followed incidental encoding of natural scenes that also varied in emotionality. Distributed repetition at encoding enhanced both memory performance and the amplitude of an old–new ERP difference over centro-parietal sensors. To assess whether these repetition effects reflect encoding or retrieval differences, the recognition task was replaced with passive viewing of old and new pictures in Experiment 2. In the absence of an explicit recognition task, ERPs were completely unaffected by repetition at encoding, and only emotional pictures prompted a modestly enhanced old–new difference. Taken together, the data suggest that repetition facilitates retrieval processes and that, in the absence of an explicit recognition task, differences in old–new ERPs are only apparent for affective cues. PMID:22842817
Differences in neuropsychological performance between subtypes of obsessive-compulsive disorder.
Nedeljkovic, Maja; Kyrios, Michael; Moulding, Richard; Doron, Guy; Wainwright, Kylie; Pantelis, Chris; Purcell, Rosemary; Maruff, Paul
2009-03-01
Neuropsychological studies have suggested that frontal-striatal dysfunction plays a role in obsessive-compulsive disorder (OCD), although findings have been inconsistent, possibly due to heterogeneity within the disorder and methodological issues. The purpose of the present study was therefore to compare the neuropsychological performance of different subtypes of OCD and matched non-clinical controls (NCs) on the Cambridge Automated Neuropsychological Test Battery (CANTAB). Fifty-nine OCD patients and 59 non-clinical controls completed selected tests from CANTAB examining executive function, visual memory and attentional-set shifting. Depression, anxiety and OCD symptoms were also assessed. From 59 OCD patients, four subtypes were identified: (i) washers; (ii) checkers; (iii) obsessionals; and (iv) mixed symptom profile. Comparisons between washers, checkers, obsessionals and NCs indicated few differences, although checkers were generally found to exhibit poorer performance on spatial working memory, while obsessionals performed poorly on the spatial recognition task. Both checkers and the mixed subgroups showed slowed initial movement on the Stockings of Cambridge planning task and poorer pattern recognition relative to NCs. Overall the results suggested greater impairments in performance on neuropsychological tasks in checkers relative to other subtypes, although the observed effects were small and the conclusions limited by the small subtype samples. Future research will need to account for factors that influence neuropsychological performance in OCD subtypes.
Remember to blink: Reduced attentional blink following instructions to forget.
Taylor, Tracy L
2018-04-24
This study used rapid serial visual presentation (RSVP) to determine whether, in an item-method directed forgetting task, study word processing ends earlier for forget words than for remember words. The critical manipulation required participants to monitor an RSVP stream of black nonsense strings in which a single blue word was embedded. The next item to follow the word was a string of red fs that instructed the participant to forget the word or green rs that instructed the participant to remember the word. After the memory instruction, a probe string of black xs or os appeared at postinstruction positions 1-8. Accuracy in reporting the identity of the probe string revealed an attenuated attentional blink following instructions to forget. A yes-no recognition task that followed the study trials confirmed a directed forgetting effect, with better recognition of remember words than forget words. Considered in the context of control conditions that required participants to commit either all or none of the study words to memory, the pattern of probe identification accuracy following the directed forgetting task argues that an intention to forget releases limited-capacity attentional resources sooner than an instruction to remember, despite participants needing to maintain an ongoing rehearsal set in both cases.
ERIC Educational Resources Information Center
Farber, Ellen A.; Moely, Barbara E.
Results of two studies investigating children's abilities to use different kinds of cues to infer another's affective state are reported in this paper. In the first study, 48 children (3, 4, and 6 to 7 years of age) were given three different kinds of tasks (interpersonal task, facial recognition task, and vocal recognition task). A cross-age…
Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M
2002-01-01
Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. Methods: 30 subjects with AMD (age range 66–90 years; visual acuity 0.4–1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Results: Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = −0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = −0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Conclusion: Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. 
A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance. PMID:12185131
Task-Dependent Masked Priming Effects in Visual Word Recognition
Kinoshita, Sachiko; Norris, Dennis
2012-01-01
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
Influence of auditory attention on sentence recognition captured by the neural phase.
Müller, Jana Annina; Kollmeier, Birger; Debener, Stefan; Brand, Thomas
2018-03-07
The aim of this study was to investigate whether attentional influences on speech recognition are reflected in the neural phase entrained by an external modulator. Sentences were presented in 7 Hz sinusoidally modulated noise while the neural response to that modulation frequency was monitored by electroencephalogram (EEG) recordings in 21 participants. We implemented a selective attention paradigm including three different attention conditions while keeping physical stimulus parameters constant. The participants' task was either to repeat the sentence as accurately as possible (speech recognition task), to count the number of decrements implemented in modulated noise (decrement detection task), or to do both (dual task), while the EEG was recorded. Behavioural analysis revealed reduced performance in the dual task condition for decrement detection, possibly reflecting limited cognitive resources. EEG analysis revealed no significant differences in power for the 7 Hz modulation frequency, but an attention-dependent phase difference between tasks. Further phase analysis revealed a significant difference 500 ms after sentence onset between trials with correct and incorrect responses for speech recognition, indicating that speech recognition performance and the neural phase are linked via selective attention mechanisms, at least shortly after sentence onset. However, the neural phase effects identified were small and await further investigation. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Acquired prosopagnosia without word recognition deficits.
Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley
2015-01-01
It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words are carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT.
They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands
ERIC Educational Resources Information Center
Diana, Rachel A.; Reder, Lynne M.
2006-01-01
Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…
Convolutional neural networks and face recognition task
NASA Astrophysics Data System (ADS)
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
2017-09-01
Computer vision tasks have remained highly important over the past several years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to this task, but still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply a Canny edge detector. At the next stage we use a convolutional neural network (CNN) to solve the face recognition and person identification task.
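The three-stage pipeline this abstract describes (face-area extraction, edge detection, CNN classification) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the edge stage substitutes a simplified Gaussian-smooth-plus-Sobel gradient for full Canny (no non-maximum suppression or hysteresis), and the "CNN" is a single random convolution layer with ReLU and pooling.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Separable-free 2D Gaussian smoothing kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def conv2d(img, kernel):
    # Naive same-size 2D convolution with edge padding.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img):
    # Simplified stand-in for the Canny stage: Gaussian smoothing
    # followed by Sobel gradient magnitude.
    smoothed = conv2d(img, gaussian_kernel())
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = conv2d(smoothed, sx)
    gy = conv2d(smoothed, sx.T)
    return np.hypot(gx, gy)

def cnn_features(edges, kernels):
    # Toy CNN forward pass: one convolution + ReLU + global average
    # pooling per filter, yielding a feature vector for matching.
    return np.array([np.maximum(conv2d(edges, k), 0).mean() for k in kernels])

rng = np.random.default_rng(0)
face_crop = rng.random((32, 32))        # stands in for the detected face area
edges = edge_map(face_crop)
filters = [rng.standard_normal((3, 3)) for _ in range(4)]
feats = cnn_features(edges, filters)    # 4-dim feature vector
```

In practice the face crop would come from a detector and the edge map would feed a trained network, but the data flow between the stages is the same.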
Multi-task learning with group information for human action recognition
NASA Astrophysics Data System (ADS)
Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang
2018-04-01
Human action recognition is an important and challenging task in computer vision research, due to variations in human motion performance, interpersonal differences and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we first obtain group information by calculating the mutual information capturing the latent relationship between Gaussian components and action categories, and by clustering similar action categories into the same group using affinity propagation. Additionally, to exploit the relationships between related tasks, we incorporate this group information into multi-task learning. Experimental results on two popular benchmarks (the UCF50 and HMDB51 datasets) demonstrate the superiority of our proposed MTL-GI framework.
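The group-information step, computing mutual information between Gaussian components and action categories, might look like the following sketch. The co-activation counts are hypothetical, and the subsequent affinity propagation clustering is only noted in a comment.

```python
import numpy as np

def mutual_information(joint):
    # joint[i, j]: estimated joint probability (or count) that a sample
    # activates Gaussian component i and belongs to action category j.
    # Returns I(X; Y) in nats.
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # component marginal, shape (n, 1)
    py = joint.sum(axis=0, keepdims=True)   # category marginal,  shape (1, m)
    nz = joint > 0                          # skip zero cells (0 * log 0 = 0)
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

# Hypothetical component/category co-activation counts: a strong diagonal
# means components are informative about categories, giving high MI.
counts = np.array([[30.0,  2.0,  1.0],
                   [ 2.0, 28.0,  3.0],
                   [ 1.0,  4.0, 29.0]])
mi = mutual_information(counts)

# Categories whose component-usage profiles are similar would then be
# clustered into groups (the paper uses affinity propagation for this step).
```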
Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan
2015-01-01
Objective: To study the involvement of the anterior nuclei of the thalamus (ANT), as compared to the involvement of the hippocampus, in the processes of encoding and recognition during visual and verbal memory tasks. Methods: We studied intracerebral recordings in patients with pharmacoresistant epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT, and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. Results: P300-like potentials were recorded in the hippocampus during the visual and verbal memory encoding and recognition tasks, and in the ANT during the visual encoding and the visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. Conclusions: The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. The ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures. PMID:26529407
Multiple confidence estimates as indices of eyewitness memory.
Sauer, James D; Brewer, Neil; Weber, Nathan
2008-08-01
Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.
Distributed cooperating processes in a mobile robot control system
NASA Technical Reports Server (NTRS)
Skillman, Thomas L., Jr.
1988-01-01
A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing to the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure is discussed.
Encoding processes during retrieval tasks.
Buckner, R L; Wheeler, M E; Sheridan, M A
2001-04-01
Episodic memory encoding is pervasive across many kinds of task and often arises as a secondary processing effect in tasks that do not require intentional memorization. To illustrate the pervasive nature of information processing that leads to episodic encoding, a form of incidental encoding was explored based on the "Testing" phenomenon: The incidental-encoding task was an episodic memory retrieval task. Behavioral data showed that performing a memory retrieval task was as effective as intentional instructions at promoting episodic encoding. During fMRI imaging, subjects viewed old and new words and indicated whether they remembered them. Relevant to encoding, the fate of the new words was examined using a second, surprise test of recognition after the imaging session. fMRI analysis of those new words that were later remembered revealed greater activity in left frontal regions than those that were later forgotten, the same pattern of results as previously observed for traditional incidental and intentional episodic encoding tasks. This finding may offer a partial explanation for why repeated testing improves memory performance. Furthermore, the observation of correlates of episodic memory encoding during retrieval tasks challenges some interpretations that arise from direct comparisons between "encoding tasks" and "retrieval tasks" in imaging data. Encoding processes and their neural correlates may arise in many tasks, even those nominally labeled as retrieval tasks by the experimenter.
Task by stimulus interactions in brain responses during Chinese character processing.
Yang, Jianfeng; Wang, Xiaojuan; Shu, Hua; Zevin, Jason D
2012-04-02
In the visual word recognition literature, it is well understood that various stimulus effects interact with behavioral task. For example, effects of word frequency are exaggerated and effects of spelling-to-sound regularity are reduced in the lexical decision task, relative to reading aloud. Neuroimaging studies of reading often examine effects of task and stimulus properties on brain activity independently, but potential interactions between task demands and stimulus effects have not been extensively explored. To address this issue, we conducted lexical decision and symbol detection tasks using stimuli that varied parametrically in their word-likeness, and tested for task by stimulus class interactions. Interactions were found throughout the reading system, such that stimulus selectivity was observed during the lexical decision task, but not during the symbol detection task. Further, the pattern of stimulus selectivity was directly related to task difficulty, so that the strongest brain activity was observed to the most word-like stimuli that required "no" responses, whereas brain activity to words, which elicit rapid and accurate "yes" responses were relatively weak. This is in line with models that argue for task-dependent specialization of brain regions, and contrasts with the notion of task-independent stimulus selectivity in the reading system. Copyright © 2012 Elsevier Inc. All rights reserved.
van Veluw, Susanne J; Chance, Steven A
2014-03-01
The perception of self and others is a key aspect of social cognition. In order to investigate the neurobiological basis of this distinction we reviewed two classes of task that study self-awareness and awareness of others (theory of mind, ToM). A reliable task to measure self-awareness is the recognition of one's own face in contrast to the recognition of others' faces. False-belief tasks are widely used to identify neural correlates of ToM as a measure of awareness of others. We performed an activation likelihood estimation meta-analysis, using the fMRI literature on self-face recognition and false-belief tasks. The brain areas involved in performing false-belief tasks were the medial prefrontal cortex (MPFC), bilateral temporo-parietal junction, precuneus, and the bilateral middle temporal gyrus. Distinct self-face recognition regions were the right superior temporal gyrus, the right parahippocampal gyrus, the right inferior frontal gyrus/anterior cingulate cortex, and the left inferior parietal lobe. Overlapping brain areas were the superior temporal gyrus, and the more ventral parts of the MPFC. We confirmed that self-recognition in contrast to recognition of others' faces, and awareness of others involves a network that consists of separate, distinct neural pathways, but also includes overlapping regions of higher order prefrontal cortex where these processes may be combined. Insights derived from the neurobiology of disorders such as autism and schizophrenia are consistent with this notion.
Familiarity and face emotion recognition in patients with schizophrenia.
Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto
2014-01-01
To assess the emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by 4 familiar people (parents, siblings or friends), who were photographed expressing the 6 basic Ekman emotions. Face emotion recognition in familiar faces was assessed with this ad hoc instrument. In each case, the patient scored (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar face task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was with emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence to their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.
Impact of Fetal-Neonatal Iron Deficiency on Recognition Memory at 2 Months of Age.
Geng, Fengji; Mai, Xiaoqin; Zhan, Jianying; Xu, Lin; Zhao, Zhengyan; Georgieff, Michael; Shao, Jie; Lozoff, Betsy
2015-12-01
To assess the effects of fetal-neonatal iron deficiency on recognition memory in early infancy. Perinatal iron deficiency delays or disrupts hippocampal development in animal models and thus may impair related neural functions in human infants, such as recognition memory. Event-related potentials were used in an auditory recognition memory task to compare 2-month-old Chinese infants with iron sufficiency or deficiency at birth. Fetal-neonatal iron deficiency was defined 2 ways: high zinc protoporphyrin/heme ratio (ZPP/H > 118 μmol/mol) or low serum ferritin (<75 μg/L) in cord blood. Late slow wave was used to measure infant recognition of mother's voice. Event-related potential patterns differed significantly for fetal-neonatal iron deficiency as defined by high cord ZPP/H but not low ferritin. Comparing 35 infants with iron deficiency (ZPP/H > 118 μmol/mol) to 92 with lower ZPP/H (iron-sufficient), only infants with iron sufficiency showed larger late slow wave amplitude for stranger's voice than mother's voice in frontal-central and parietal-occipital locations, indicating the recognition of mother's voice. Infants with iron sufficiency showed electrophysiological evidence of recognizing their mother's voice, whereas infants with fetal-neonatal iron deficiency did not. Their poorer auditory recognition memory at 2 months of age is consistent with effects of fetal-neonatal iron deficiency on the developing hippocampus. Copyright © 2015 Elsevier Inc. All rights reserved.
Does cortisol modulate emotion recognition and empathy?
Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja
2016-04-01
Emotion recognition and empathy are important aspects in the interaction and understanding of other people's behaviors and feelings. The human environment includes stressful situations that affect social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect for treatment or sex on either empathy or emotion recognition, but a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion and task difficulty. At 40% task difficulty, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% task difficulty, men and women performed equally well in recognizing sad faces, but men performed worse compared to women with regard to angry faces. Overall, our results did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cross, Laura; Brown, Malcolm W; Aggleton, John P; Warburton, E Clea
2012-12-21
In humans recognition memory deficits, a typical feature of diencephalic amnesia, have been tentatively linked to mediodorsal thalamic nucleus (MD) damage. Animal studies have occasionally investigated the role of the MD in single-item recognition, but have not systematically analyzed its involvement in other recognition memory processes. In Experiment 1 rats with bilateral excitotoxic lesions in the MD or the medial prefrontal cortex (mPFC) were tested in tasks that assessed single-item recognition (novel object preference), associative recognition memory (object-in-place), and recency discrimination (recency memory task). Experiment 2 examined the functional importance of the interactions between the MD and mPFC using disconnection techniques. Unilateral excitotoxic lesions were placed in both the MD and the mPFC in either the same (MD + mPFC Ipsi) or opposite hemispheres (MD + mPFC Contra group). Bilateral lesions in the MD or mPFC impaired object-in-place and recency memory tasks, but had no effect on novel object preference. In Experiment 2 the MD + mPFC Contra group was significantly impaired in the object-in-place and recency memory tasks compared with the MD + mPFC Ipsi group, but novel object preference was intact. Thus, connections between the MD and mPFC are critical for recognition memory when the discriminations involve associative or recency information. However, the rodent MD is not necessary for single-item recognition memory.
Recognition intent and visual word recognition.
Wang, Man-Ying; Ching, Chi-Le
2009-03-01
This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.
Large-area settlement pattern recognition from Landsat-8 data
NASA Astrophysics Data System (ADS)
Wieland, Marc; Pittore, Massimiliano
2016-09-01
The study presents an image processing and analysis pipeline that combines object-based image analysis with a Support Vector Machine to derive a multi-layered settlement product from Landsat-8 data over large areas. Forty-three image scenes are processed over large parts of Central Asia (Southern Kazakhstan, Kyrgyzstan, Tajikistan and Eastern Uzbekistan). The main tasks tackled by this work include built-up area identification, settlement type classification and urban structure type pattern recognition. Besides commonly used accuracy assessments of the resulting map products, thorough performance evaluations are carried out under varying conditions to tune algorithm parameters and assess their applicability for the given tasks. As part of this, several research questions are addressed. In particular, the influence of the improved spatial and spectral resolution of Landsat-8 on SVM performance in identifying built-up areas and urban structure types is evaluated. Also the influence of an extended feature space including digital elevation model features is tested for mountainous regions. Moreover, the spatial distribution of classification uncertainties is analyzed and compared to the heterogeneity of the building stock within the computational unit of the segments. The study concludes that the information content of Landsat-8 images is sufficient for the tested classification tasks and even detailed urban structures could be extracted with satisfying accuracy. Freely available ancillary settlement point location data could further improve the built-up area classification. Digital elevation features and pan-sharpening could, however, not significantly improve the classification results. The study highlights the importance of dynamically tuned classifier parameters, and underlines the use of Shannon entropy computed from the soft answers of the SVM as a valid measure of the spatial distribution of classification uncertainties.
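The Shannon entropy computed from the classifier's soft answers, used here as a per-segment uncertainty measure, is a one-line computation; the probability vectors below are illustrative, not taken from the study.

```python
import math

def shannon_entropy(probs):
    # Entropy (in bits) of a class-probability vector, e.g. the calibrated
    # soft outputs of an SVM for one image segment. Higher values mean a
    # less certain classification; 0 means the classifier is certain.
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = shannon_entropy([0.9, 0.05, 0.05])   # low uncertainty
ambiguous = shannon_entropy([0.4, 0.35, 0.25])   # high uncertainty
```

Mapping these per-segment entropies back onto the map yields the spatial distribution of classification uncertainty described above.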
Hyperspectral image analysis using artificial color
NASA Astrophysics Data System (ADS)
Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet
2010-03-01
By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, those from a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced an essentially identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to perform irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in this manner produced the desired result: good discrimination or segmentation that can be achieved very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in both cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with those of conventional pattern recognition. We nevertheless used some force fitting to make a comparison, because it shows what is also obvious qualitatively: on our tasks, our method works better.
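The core projection step, collapsing a rich spectrum onto a few broad overlapping response curves, can be sketched with a hypothetical hyperspectral cube and Gaussian response curves. Both are assumptions; the paper's measured sensitivity curves are not reproduced here:

```python
import numpy as np

# Hypothetical hyperspectral cube: 64x64 pixels, 100 spectral bands.
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))

# Two broad, overlapping Gaussian response curves over the band axis,
# analogous to cone-cell sensitivity curves.
bands = np.linspace(0.0, 1.0, 100)
curve_a = np.exp(-((bands - 0.35) ** 2) / (2 * 0.15 ** 2))
curve_b = np.exp(-((bands - 0.65) ** 2) / (2 * 0.15 ** 2))
curves = np.stack([curve_a, curve_b], axis=1)   # shape (100, 2)

# The projection collapses 100 bands to 2 "artificial color" channels:
# a many-to-one, hence irreversible, reduction, as in the POCS analogy.
artificial = cube @ curves                      # shape (64, 64, 2)
```

Discrimination or segmentation then operates on the two-channel `artificial` image rather than on the full 100-band cube, which is what makes real-time processing plausible.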
Improved signal processing approaches in an offline simulation of a hybrid brain–computer interface
Brunner, Clemens; Allison, Brendan Z.; Krusienski, Dean J.; Kaiser, Vera; Müller-Putz, Gernot R.; Pfurtscheller, Gert; Neuper, Christa
2012-01-01
In a conventional brain–computer interface (BCI) system, users perform mental tasks that yield specific patterns of brain activity. A pattern recognition system determines which brain activity pattern a user is producing and thereby infers the user’s mental task, allowing users to send messages or commands through brain activity alone. Unfortunately, despite extensive research to improve classification accuracy, BCIs almost always exhibit errors, which are sometimes so severe that effective communication is impossible. We recently introduced a new idea to improve accuracy, especially for users with poor performance. In an offline simulation of a “hybrid” BCI, subjects performed two mental tasks independently and then simultaneously. This hybrid BCI could use two different types of brain signals common in BCIs – event-related desynchronization (ERD) and steady-state evoked potentials (SSEPs). This study suggested that such a hybrid BCI is feasible. Here, we re-analyzed the data from our initial study. We explored eight different signal processing methods that aimed to improve classification and further assess both the causes and the extent of the benefits of the hybrid condition. Most analyses showed that the improved methods described here yielded a statistically significant improvement over our initial study. Some of these improvements could be relevant to conventional BCIs as well. Moreover, the number of BCI-illiterate users (those unable to achieve effective control) could be reduced under the hybrid condition. Results are also discussed in terms of dual task interference and relevance to protocol design in hybrid BCIs. PMID:20153371
Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric
2016-01-01
Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013
Patterns of source monitoring bias in incarcerated youths with and without conduct problems.
Morosan, Larisa; Badoud, Deborah; Salaminios, George; Eliez, Stephan; Van der Linden, Martial; Heller, Patrick; Debbané, Martin
2018-01-01
Antisocial individuals present behaviours that violate social norms and the rights of others. In the present study, we examine whether biases in monitoring self-generated cognitive material might be linked to antisocial manifestations during adolescence. We further examine the association with psychopathic traits and conduct problems (CPs). Sixty-five incarcerated adolescents (IAs; M age = 15.85, SD = 1.30) and 88 community adolescents (CAs; M age = 15.78, SD = 1.60) participated in our study. In the IA group, 28 adolescents presented CPs (M age = 16.06, SD = 1.41) and 19 did not meet the diagnostic criteria for CPs (M age = 15.97, SD = 1.20). Source monitoring was assessed through a speech-monitoring task, using items requiring different levels of cognitive effort; recognition and source-monitoring bias scores (internalising and externalising biases) were calculated. Between-group comparisons indicated greater overall biases and different patterns of source-monitoring bias: IA participants manifested a greater externalising bias, whereas CA participants presented a greater internalising bias. In addition, IAs with CPs presented different patterns of item recognition. These results indicate that the two groups of adolescents present different types of source-monitoring bias for self-generated speech. In addition, the IAs with CPs present impairments in item recognition. Future studies may examine the developmental implications of self-monitoring biases in the perseverance of antisocial behaviours from adolescence to adulthood.
Oh, Jooyoung; Chun, Ji-Won; Kim, Eunseong; Park, Hae-Jeong; Lee, Boreom; Kim, Jae-Jin
2017-01-01
Patients with schizophrenia exhibit several cognitive deficits, including memory impairment. Problems with recognition memory can hinder socially adaptive behavior. Previous investigations have suggested that altered activation of the frontotemporal area plays an important role in recognition memory impairment. However, the cerebral networks related to these deficits are not known. The aim of this study was to elucidate the brain networks required for recognizing socially relevant information in patients with schizophrenia performing an old-new recognition task. Sixteen patients with schizophrenia and 16 controls participated in this study. First, the subjects performed the theme-identification task during functional magnetic resonance imaging. In this task, pictures depicting social situations were presented with three words, and the subjects were asked to select the best theme word for each picture. The subjects then performed an old-new recognition task in which they were asked to discriminate whether the presented words were old or new. Task performance and neural responses in the old-new recognition task were compared between the subject groups. An independent component analysis of the functional connectivity was performed. The patients with schizophrenia exhibited decreased discriminability and increased activation of the right superior temporal gyrus compared with the controls during correct responses. Furthermore, aberrant network activities were found in the frontopolar and language comprehension networks in the patients. The functional connectivity analysis showed aberrant connectivity in the frontopolar and language comprehension networks in the patients with schizophrenia, and these aberrations possibly contribute to their low recognition performance and social dysfunction. These results suggest that the frontopolar and language comprehension networks are potential therapeutic targets in patients with schizophrenia.
Modulation of working memory function by motivation through loss-aversion.
Krawczyk, Daniel C; D'Esposito, Mark
2013-04-01
Cognitive performance is affected by motivation. Few studies, however, have investigated the neural mechanisms of the influence of motivation through potential monetary punishment on working memory. We employed functional MRI during a delayed recognition task that manipulated top-down control demands with added monetary incentives to some trials in the form of potential losses of bonus money. Behavioral performance on the task was influenced by loss-threatening incentives in the form of faster and more accurate performance. As shown previously, we found enhancement of activity for relevant stimuli occurs throughout all task periods (e.g., stimulus encoding, maintenance, and response) in both prefrontal and visual association cortex. Further, these activation patterns were enhanced for trials with possible monetary loss relative to nonincentive trials. During the incentive cue, the amygdala and striatum showed significantly greater activation when money was at risk of loss on the trial. We also evaluated patterns of functional connectivity between regions responsive to monetary consequences and prefrontal areas responsive to the task. This analysis revealed greater delay period connectivity between the left insula and prefrontal cortex with possible monetary loss relative to nonincentive trials. Overall, these results reveal that incentive motivation can modulate performance on working memory tasks through top-down signals via amplification of activity within prefrontal and visual association regions selective to processing the perceptual inputs of the stimuli to be remembered. Copyright © 2011 Wiley Periodicals, Inc.
Cognitive and neural mechanisms of decision biases in recognition memory.
Windmann, Sabine; Urbach, Thomas P; Kutas, Marta
2002-08-01
In recognition memory tasks, stimuli can be classified as "old" either on the basis of accurate memory or a bias to respond "old", yet bias has received little attention in the cognitive neuroscience literature. Here we examined the pattern and timing of bias-related effects in event-related brain potentials (ERPs) to determine whether the bias is linked more to memory retrieval or to response verification processes. Participants were divided into a High Bias and a Low Bias group according to their bias to respond "old". These groups did not differ in recognition accuracy or in the ERP pattern to items that actually were old versus new (Objective Old/New Effect). However, when the old/new distinction was based on each subject's perspective, i.e. when items judged "old" were compared with those judged "new" (Subjective Old/New Effect), significant group differences were observed over prefrontal sites with a timing (300-500 ms poststimulus) more consistent with bias acting early on memory retrieval processes than on post-retrieval response verification processes. In the standard old/new effect (Hits vs Correct Rejections), these group differences were intermediate to those for the Objective and the Subjective comparisons, indicating that such comparisons are confounded by response bias. We propose that these biases are top-down controlled processes mediated by prefrontal cortex areas.
The human reproductive pattern and the changes in women's roles.
Díaz, S
1994-12-01
A broader evolutionary perspective and an understanding of our reproductive behavior in the context of a constantly changing society can contribute to the debate on breastfeeding and reproductive roles of both women and men. Breastfeeding is women's work and should be valued as such. Families and societies need to accept their share of the task of supporting women to breastfeed, to include favorable conditions at the workplace, adequate health care, better nutrition, and recognition of the woman's role and importance.
Improving EMG based classification of basic hand movements using EMD.
Sapsanis, Christos; Georgoulas, George; Tzes, Anthony; Lymberopoulos, Dimitrios
2013-01-01
This paper presents a pattern recognition approach for the identification of basic hand movements using surface electromyographic (EMG) data. The EMG signal is decomposed using Empirical Mode Decomposition (EMD) into Intrinsic Mode Functions (IMFs), and subsequently a feature extraction stage takes place. Various combinations of feature subsets are tested using a simple linear classifier for the detection task. Our results suggest that the use of EMD can increase the discrimination ability of the conventional feature sets extracted from the raw EMG signal.
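A minimal sketch of the feature-extraction stage, assuming the IMFs have already been obtained from a sifting routine (e.g. a library such as PyEMD). The synthetic IMFs and the particular feature set (mean absolute value, waveform length, zero crossings) are illustrative choices, not the paper's exact combination:

```python
import numpy as np

def emg_features(x, zc_thresh=0.01):
    """Common time-domain EMG features for one signal or IMF:
    mean absolute value (MAV), waveform length (WL), zero crossings (ZC)."""
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(np.diff(x)))
    signs = x[:-1] * x[1:]
    # Count sign changes whose amplitude step exceeds a small noise threshold.
    zc = np.sum((signs < 0) & (np.abs(np.diff(x)) > zc_thresh))
    return np.array([mav, wl, zc], dtype=float)

# Stand-ins for IMFs that a sifting routine would return; features from each
# IMF are concatenated into one vector per EMG analysis window.
t = np.linspace(0, 1, 1000)
imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 10 * t)]
feature_vector = np.concatenate([emg_features(imf) for imf in imfs])
```

The resulting `feature_vector` would then be fed to a linear classifier, one vector per movement window.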
The use of ERTS imagery in reservoir management and operation
NASA Technical Reports Server (NTRS)
Cooper, S. (Principal Investigator)
1973-01-01
There are no author-identified significant results in this report. Preliminary analysis suggests that the configuration and areal coverage of surface waters, as well as other hydrologically related terrain features, can be extracted from ERTS-1 imagery to a useful extent. Computer-oriented pattern recognition techniques are being developed to help automate the identification and analysis of hydrologic features. Considerable man-machine interaction is required while training the computer for these tasks.
A fuzzy clustering algorithm to detect planar and quadric shapes
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa
1992-01-01
In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.
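For orientation, a plain fuzzy c-means loop with point prototypes is sketched below. The authors' algorithm replaces the point-to-centre distance with a point-to-surface distance so that planar and quadric shells become prototypes; that variant is not reproduced here:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means with point prototypes. A shell-clustering variant
    would swap the point-to-centre distance for a point-to-surface distance."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m                                # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Two well-separated noisy blobs as a toy stand-in for image data.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (50, 2)),
               np.random.default_rng(2).normal(3, 0.1, (50, 2))])
centres, U = fuzzy_c_means(X)
```

Because the memberships are fuzzy, each point retains a graded affiliation to every prototype, which is what lets such algorithms cope with noise and sparse data.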
Autonomous operations through onboard artificial intelligence
NASA Technical Reports Server (NTRS)
Sherwood, R. L.; Chien, S.; Castano, R.; Rabideau, G.
2002-01-01
The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting. Demonstration of these capabilities in a flight environment will open up tremendous new opportunities in planetary science, space physics, and earth science that would be unreachable without this technology.
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper develops a cognitive task-load (CTL) classification algorithm and a task-allocation strategy to sustain optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed around a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a small number of CTL classes. The least-squares support vector machine (LSSVM) serves as the dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments was performed on seven volunteer participants in a simulated process control task environment. A participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG- and ECG-related features, is reduced to 12 salient features (including 11 EEG-related features) using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. The predicted CTL is then used to adaptively allocate process control tasks between the operator and a computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by the proposed adaptive automation strategy.
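A binary least-squares SVM of the kind used as the dynamic classifier can be sketched as a single linear solve; this is what distinguishes LSSVM from the quadratic program of a standard SVM. The toy 2-D data and RBF kernel are assumptions, and the paper's 5-class problem would require a multiclass extension such as one-vs-rest:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Binary LSSVM (labels +/-1) with an RBF kernel: training reduces to
    solving one linear system [[0, y^T], [y, Omega + I/gamma]] [b; a] = [0; 1]."""
    n = len(y)
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                       # bias b, dual weights alpha

def lssvm_predict(X_train, y, b, alpha, X_new, sigma=1.0):
    d2 = np.sum((X_new[:, None] - X_train[None, :]) ** 2, axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)

# Toy two-class data standing in for the EEG/ECG feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, y, b, alpha, X)
```

The regularisation parameter `gamma` and kernel width `sigma` would be tuned per participant, matching the participant-specific models described above.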
Approach to recognition of flexible form for credit card expiration date recognition as example
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya
2015-12-01
In this paper we consider the task of locating information fields within documents of flexible form, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case the task is to be solved on mobile devices, so computational complexity has to be as low as possible. We provide results of an analysis of the suggested algorithm. The error distribution of the recognition system shows that the suggested algorithm solves the task with the required accuracy.
Bayesian multi-task learning for decoding multi-subject neuroimaging data.
Marquand, Andre F; Brammer, Michael; Williams, Steven C R; Doyle, Orla M
2014-05-15
Decoding models based on pattern recognition (PR) are becoming increasingly important tools for neuroimaging data analysis. In contrast to alternative (mass-univariate) encoding approaches that use hierarchical models to capture inter-subject variability, inter-subject differences are not typically handled efficiently in PR. In this work, we propose to overcome this problem by recasting the decoding problem in a multi-task learning (MTL) framework. In MTL, a single PR model is used to learn different but related "tasks" simultaneously. The primary advantage of MTL is that it makes more efficient use of the data available and leads to more accurate models by making use of the relationships between tasks. In this work, we construct MTL models where each subject is modelled by a separate task. We use a flexible covariance structure to model the relationships between tasks and induce coupling between them using Gaussian process priors. We present an MTL method for classification problems and demonstrate a novel mapping method suitable for PR models. We apply these MTL approaches to classifying many different contrasts in a publicly available fMRI dataset and show that the proposed MTL methods produce higher decoding accuracy and more consistent discriminative activity patterns than currently used techniques. Our results demonstrate that MTL provides a promising method for multi-subject decoding studies by focusing on the commonalities between a group of subjects rather than the idiosyncratic properties of different subjects. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L
2015-04-01
Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0.001; left/right judgment task P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r² = 0.523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.
Facial emotion recognition is inversely correlated with tremor severity in essential tremor.
Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G
2014-04-01
We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the Iowa Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.
ERIC Educational Resources Information Center
Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.
2007-01-01
The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…
Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues
ERIC Educational Resources Information Center
Ray, Suchismita; Bates, Marsha E.
2006-01-01
Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…
Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.
Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas
2015-08-01
Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.
Towards pattern generation and chaotic series prediction with photonic reservoir computers
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge
2016-03-01
Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, thus greatly enriching the dynamics of the system. This will make it possible to tackle complex prediction tasks in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a larger reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for the experiments we will carry out in the coming months.
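The underlying reservoir computing scheme, before output feedback is added, can be sketched as a small echo state network with a ridge-regression readout. All sizes and the sine next-step task are illustrative, not the opto-electronic setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 100, 500

# Random reservoir, rescaled to spectral radius 0.9 for the echo-state property.
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

# Drive the reservoir with a scalar input series and collect its states.
u = np.sin(np.arange(n_steps) * 0.1)
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Linear readout trained by ridge regression to predict the next input value;
# only this readout is trained, the reservoir itself stays fixed.
target = np.roll(u, -1)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
```

Closing the loop, i.e. feeding `pred` back in place of `u`, is the output-feedback configuration the abstract describes for pattern generation and chaotic series prediction.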
Some Memories are Odder than Others: Judgments of Episodic Oddity Violate Known Decision Rules
O’Connor, Akira R.; Guhl, Emily N.; Cox, Justin C.; Dobbins, Ian G.
2011-01-01
Current decision models of recognition memory are based almost entirely on one paradigm, single item old/new judgments accompanied by confidence ratings. This task results in receiver operating characteristics (ROCs) that are well fit by both signal-detection and dual-process models. Here we examine an entirely new recognition task, the judgment of episodic oddity, whereby participants select the mnemonically odd members of triplets (e.g., a new item hidden among two studied items). Using the only two known signal-detection rules of oddity judgment derived from the sensory perception literature, the unequal variance signal-detection model predicted that an old item among two new items would be easier to discover than a new item among two old items. In contrast, four separate empirical studies demonstrated the reverse pattern: triplets with two old items were the easiest to resolve. This finding was anticipated by the dual-process approach as the presence of two old items affords the greatest opportunity for recollection. Furthermore, a bootstrap-fed Monte Carlo procedure using two independent datasets demonstrated that the dual-process parameters typically observed during single item recognition correctly predict the current oddity findings, whereas unequal variance signal-detection parameters do not. Episodic oddity judgments represent a case where dual- and single-process predictions qualitatively diverge and the findings demonstrate that novelty is “odder” than familiarity. PMID:22833695
Infrared Cephalic-Vein to Assist Blood Extraction Tasks: Automatic Projection and Recognition
NASA Astrophysics Data System (ADS)
Lagüela, S.; Gesto, M.; Riveiro, B.; González-Aguilera, D.
2017-05-01
The thermal infrared band is not commonly used in photogrammetric and computer vision algorithms, mainly due to the low spatial resolution of this type of imagery. However, this band captures sub-superficial information, increasing the capabilities of visible bands regarding applications. This fact is especially important in biomedicine and biometrics, allowing the geometric characterization of interior organs and pathologies with photogrammetric principles, as well as automatic identification and labelling using computer vision algorithms. This paper presents advances of close-range photogrammetry and computer vision applied to thermal infrared imagery, with the final application of Augmented Reality in order to widen its application in the biomedical field. In this case, the thermal infrared image of the arm is acquired and simultaneously projected on the arm, together with the identification label of the cephalic vein. This way, blood analysts are assisted in finding the vein for blood extraction, especially in those cases where identification by the human eye is a complex task. Vein recognition is performed based on the Gaussian temperature distribution in the area of the vein, while the calibration between projector and thermographic camera is developed through feature extraction and pattern recognition. The method is validated through its application to a set of volunteers of different ages and genders, in such a way that different conditions of body temperature and vein depth are covered for the applicability and reproducibility of the method.
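The Gaussian-temperature assumption behind the vein detector can be illustrated on synthetic data. The sketch below is not the authors' pipeline; the temperatures, profile length, and noise level are invented for illustration. It builds a 1-D temperature profile across the arm with a Gaussian warm ridge over the vein, then localizes the ridge from the excess-temperature moments.

```python
import numpy as np

# synthetic temperature profile across a scan line of the arm (deg C):
# baseline skin temperature plus a Gaussian-shaped warm ridge over the vein.
x = np.arange(200.0)                      # pixel positions (illustrative)
true_center, true_sigma = 120.0, 8.0
profile = 33.0 + 1.5 * np.exp(-0.5 * ((x - true_center) / true_sigma) ** 2)
profile += np.random.default_rng(1).normal(0.0, 0.02, x.size)   # sensor noise

# localize the vein: subtract the baseline (median temperature) and use the
# excess temperature as weights; under the Gaussian assumption the weighted
# mean and standard deviation recover the ridge's centre and spread.
excess = np.clip(profile - np.median(profile), 0.0, None)
center = np.sum(x * excess) / np.sum(excess)
spread = np.sqrt(np.sum((x - center) ** 2 * excess) / np.sum(excess))
```

The recovered `center` marks where the vein label would be projected onto the arm; a 2-D version of the same moment computation would run per scan line of the thermal image.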
Recognizing speech under a processing load: dissociating energetic from informational factors.
Mattys, Sven L; Brooks, Joanna; Cooke, Martin
2009-11-01
Effects of perceptual and cognitive loads on spoken-word recognition have so far largely escaped investigation. This study lays the foundations of a psycholinguistic approach to speech recognition in adverse conditions that draws upon the distinction between energetic masking, i.e., listening environments leading to signal degradation, and informational masking, i.e., listening environments leading to depletion of higher-order, domain-general processing resources, independent of signal degradation. We show that severe energetic masking, such as that produced by background speech or noise, curtails reliance on lexical-semantic knowledge and increases relative reliance on salient acoustic detail. In contrast, informational masking, induced by a resource-depleting competing task (divided attention or a memory load), results in the opposite pattern. Based on this clear dissociation, we propose a model of speech recognition that addresses not only the mapping between sensory input and lexical representations, as traditionally advocated, but also the way in which this mapping interfaces with general cognition and non-linguistic processes.
Test battery for measuring the perception and recognition of facial expressions of emotion
Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner
2014-01-01
Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
The processing of auditory and visual recognition of self-stimuli.
Hughes, Susan M; Nicholson, Shevon E
2010-12-01
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.
Goal-Directed Modulation of Neural Memory Patterns: Implications for fMRI-Based Memory Detection.
Uncapher, Melina R; Boyd-Meredith, J Tyler; Chow, Tiffany E; Rissman, Jesse; Wagner, Anthony D
2015-06-03
Remembering a past event elicits distributed neural patterns that can be distinguished from patterns elicited when encountering novel information. These differing patterns can be decoded with relatively high diagnostic accuracy for individual memories using multivoxel pattern analysis (MVPA) of fMRI data. Brain-based memory detection, if valid and reliable, would have clear utility beyond the domain of cognitive neuroscience, in the realm of law, marketing, and beyond. However, a significant boundary condition on memory decoding validity may be the deployment of "countermeasures": strategies used to mask memory signals. Here we tested the vulnerability of fMRI-based memory detection to countermeasures, using a paradigm that bears resemblance to eyewitness identification. Participants were scanned while performing two tasks on previously studied and novel faces: (1) a standard recognition memory task; and (2) a task wherein they attempted to conceal their true memory state. Univariate analyses revealed that participants were able to strategically modulate neural responses, averaged across trials, in regions implicated in memory retrieval, including the hippocampus and angular gyrus. Moreover, regions associated with goal-directed shifts of attention and thought substitution supported memory concealment, and those associated with memory generation supported novelty concealment. Critically, whereas MVPA enabled reliable classification of memory states when participants reported memory truthfully, the ability to decode memory on individual trials was compromised, even reversing, during attempts to conceal memory. Together, these findings demonstrate that strategic goal states can be deployed to mask memory-related neural patterns and foil memory decoding technology, placing a significant boundary condition on their real-world utility. Copyright © 2015 the authors.
Exploring Deep Learning and Sparse Matrix Format Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Liao, C.; Shen, X.
We proposed to explore the use of Deep Neural Networks (DNN) for addressing the longstanding barriers. The recent rapid progress of DNN technology has created a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification (e.g., telling whether an image contains a dog or a cat): in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels; for sparse matrix format selection, they are of non-zero elements. DNN could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
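The matrix-as-image analogy can be made concrete: before a DNN can classify a sparse matrix the way it classifies images, the non-zero pattern must be mapped to a fixed-size input. One simple encoding, an assumption here rather than necessarily the one used in this work, is a normalized density histogram of the non-zero coordinates:

```python
import numpy as np

def density_image(rows, cols, shape, out=32):
    """Map a sparse matrix's non-zero pattern onto a fixed out x out grid.

    Each cell holds the fraction of non-zeros falling into it, so matrices
    of any size become comparable fixed-size 'images' a CNN could consume.
    """
    r = np.asarray(rows) * out // shape[0]    # bin row indices
    c = np.asarray(cols) * out // shape[1]    # bin column indices
    img = np.zeros((out, out))
    np.add.at(img, (r, c), 1.0)               # accumulate non-zero counts
    return img / img.sum()

# a diagonal sparsity pattern: non-zeros on the main diagonal of a
# 1000 x 1000 matrix collapse onto the diagonal of the 32 x 32 image
idx = np.arange(1000)
img = density_image(idx, idx, (1000, 1000))
```

A classifier trained on such images sees a diagonal band for diagonal-dominant matrices, dense blocks for blocked matrices, and so on, which is exactly the kind of spatial pattern the abstract argues determines the best format or solver choice.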
Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji
2010-01-01
One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degrees changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.
Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO
Tamati, Terrin N.; Pisoni, David B.
2015-01-01
Background: Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample: Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842
ERIC Educational Resources Information Center
Golan, Ofer; Baron-Cohen, Simon; Golan, Yael
2008-01-01
Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on "basic" emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of "complex" emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general…
The CC chemokine receptor 5 regulates olfactory and social recognition in mice.
Kalkonde, Y V; Shelton, R; Villarreal, M; Sigala, J; Mishra, P K; Ahuja, S S; Barea-Rodriguez, E; Moretti, P; Ahuja, S K
2011-12-01
Chemokines are chemotactic cytokines that regulate cell migration and are thought to play an important role in a broad range of inflammatory diseases. The availability of chemokine receptor blockers makes them an important therapeutic target. In vitro, chemokines have been shown to modulate neurotransmission. However, it is not very clear whether chemokines play a role in behavior and cognition. Here we evaluated the role of CC chemokine receptor 5 (CCR5) in various behavioral tasks in mice using Wt (Ccr5⁺/⁺) and Ccr5-null (Ccr5⁻/⁻) mice. Ccr5⁻/⁻ mice showed enhanced social recognition. Administration of CC chemokine ligand 3 (CCL3), one of the CCR5 ligands, impaired social recognition. Since the social recognition task is dependent on the sense of olfaction, we tested olfactory recognition for social and non-social scents in these mice. Ccr5⁻/⁻ mice had enhanced olfactory recognition for both these scents, indicating that enhanced performance in the social recognition task could be due to enhanced olfactory recognition in these mice. Spatial memory and aversive memory were comparable in Wt and Ccr5⁻/⁻ mice. Collectively, these results suggest that chemokines/chemokine receptors might play an important role in olfactory recognition tasks in mice and, to our knowledge, represent the first direct demonstration of an in vivo role of CCR5 in modulating social behavior in mice. These studies are important as CCR5 blockers are undergoing clinical trials and can potentially modulate behavior. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Components of executive functioning in metamemory.
Mäntylä, Timo; Rönnlund, Michael; Kliegel, Matthias
2010-10-01
This study examined metamemory in relation to three basic executive functions (set shifting, working memory updating, and response inhibition) measured as latent variables. Young adults (Experiment 1) and middle-aged adults (Experiment 2) completed a set of executive functioning tasks and the Prospective and Retrospective Memory Questionnaire (PRMQ). In Experiment 1, source recall and face recognition tasks were included as indicators of objective memory performance. In both experiments, analyses of the executive functioning data yielded a two-factor solution, with the updating and inhibition tasks constituting a common factor and the shifting tasks a separate factor. Self-reported memory problems showed low predictive validity, but subjective and objective memory performance were related to different components of executive functioning. In both experiments, set shifting, but not updating and inhibition, was related to PRMQ, whereas source recall showed the opposite pattern of correlations in Experiment 1. These findings suggest that metamemorial judgments reflect selective effects of executive functioning and that individual differences in mental flexibility contribute to self-beliefs of efficacy.
Partially converted stereoscopic images and the effects on visual attention and memory
NASA Astrophysics Data System (ADS)
Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi
2015-03-01
This study comprised two experimental examinations of cognitive activities such as visual attention and memory in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was interposed between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. In order to examine the impact of dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct response rate in the partial 3D condition was significantly higher with the recognition task than in the other conditions. These results showed that partially converted 3D images tended to attract visual attention and to affect the viewer's memory.
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.
Gardiner, John M; Brandt, Karen R; Vargha-Khadem, Faraneh; Baddeley, Alan; Mishkin, Mortimer
2006-09-01
We report the performance in four recognition memory experiments of Jon, a young adult with early-onset developmental amnesia whose episodic memory is gravely impaired in tests of recall, but seems relatively preserved in tests of recognition, and who has developed normal levels of performance in tests of intelligence and general knowledge. Jon's recognition performance was enhanced by deeper levels of processing in comparing a more meaningful study task with a less meaningful one, but not by task enactment in comparing performance of an action with reading an action phrase. Both of these variables normally enhance episodic remembering, which Jon claimed to experience. But Jon was unable to support that claim by recollecting what it was that he remembered. Taken altogether, the findings strongly imply that Jon's recognition performance entailed little genuine episodic remembering and that the levels-of-processing effects in Jon reflected semantic, not episodic, memory.
Event identification by acoustic signature recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dress, W.B.; Kercel, S.W.
1995-07-01
Many events of interest to the security community produce acoustic emissions that are, in principle, identifiable as to cause. Some obvious examples are gunshots, breaking glass, takeoffs and landings of small aircraft, vehicular engine noises, footsteps (high frequencies when on gravel, very low frequencies when on soil), and voices (whispers to shouts). We are investigating wavelet-based methods to extract unique features of such events for classification and identification. We also discuss methods of classification and pattern recognition specifically tailored for acoustic signatures obtained by wavelet analysis. The paper is divided into three parts: completed work, work in progress, and future applications. The completed phase has led to the successful recognition of aircraft types on landing and takeoff. Both small aircraft (twin-engine turboprop) and large (commercial airliners) were included in the study. The project considered the design of a small, field-deployable, inexpensive device. The techniques developed during the aircraft identification phase were then adapted to a multispectral electromagnetic interference monitoring device now deployed in a nuclear power plant. This is a general-purpose wavelet analysis engine, spanning 14 octaves, and can be adapted for other specific tasks. Work in progress is focused on applying the methods previously developed to speaker identification. Some of the problems to be overcome include recognition of sounds as voice patterns and as distinct from possible background noises (e.g., music), as well as identification of the speaker from a short-duration voice sample. A generalization of the completed work and the work in progress is a device capable of classifying any number of acoustic events, particularly quasi-stationary events such as engine noises and voices, and singular events such as gunshots and breaking glass. We will show examples of both kinds of events and discuss their recognition likelihood.
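A toy version of the wavelet front end described above: the sketch below is illustrative only (the deployed 14-octave engine is more elaborate, and the number of levels here is an arbitrary choice). It computes per-octave detail energies with a Haar decomposition, giving a compact feature vector that a classifier could use to separate, say, a high-frequency transient from a low-frequency engine hum.

```python
import numpy as np

def haar_band_energies(signal, levels=6):
    """Per-octave detail energies from a Haar wavelet decomposition.

    At each level the signal is split into pairwise averages
    (approximation) and pairwise differences (detail); the energy of
    each detail band becomes one feature for the classifier.
    """
    s = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        if s.size % 2:                           # pad odd-length signals
            s = np.append(s, s[-1])
        a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail (one octave)
        feats.append(float(np.sum(d ** 2)))
        s = a                                    # recurse on the coarser signal
    return np.array(feats)

# a maximally fast oscillation puts all its energy in the finest octave,
# the band where gunshot-like transients also concentrate
t = np.arange(1024)
e = haar_band_energies(np.cos(np.pi * t))
```

Feature vectors like `e`, computed over short analysis windows, are the kind of input the abstract's pattern recognition stage would consume.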
Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang
2015-02-01
Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.
The posterior parietal cortex in recognition memory: a neuropsychological study.
Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A
2008-01-01
Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.
The effect of the feeling of resolution and recognition performance on the revelation effect.
Miura, Hiroshi; Itoh, Yuji
2016-10-01
The fact that engaging in a cognitive task before a recognition task increases the probability of "old" responses is known as the revelation effect. We used several cognitive tasks to examine whether the feeling of resolution, a key construct in the proposed mechanism of the revelation effect, is related to the occurrence of the revelation effect. The results show that the revelation effect was not caused by a visual search task, which elicited the feeling of resolution, but was caused by an unsolvable anagram task and an articulatory suppression task, which did not elicit the feeling of resolution. These results suggest that the revelation effect is not related to the feeling of resolution. Moreover, the revelation effect was likely to occur in participants who performed poorly on the recognition task. This result suggests that the revelation effect is inclined to occur when people depend more on familiarity than on the recollection process. Copyright © 2016 Elsevier Inc. All rights reserved.
Ease of identifying words degraded by visual noise.
Barber, P; de la Mahotière, C
1982-08-01
A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease with which the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.
Classification of Porcine Cranial Fracture Patterns Using a Fracture Printing Interface.
Wei, Feng; Bucak, Serhat Selçuk; Vollner, Jennifer M; Fenton, Todd W; Jain, Anil K; Haut, Roger C
2017-01-01
Distinguishing between accidental and abusive head trauma in children can be difficult, as there is a lack of baseline data for pediatric cranial fracture patterns. A porcine head model has recently been developed and utilized in a series of studies to investigate the effects of impact energy level, surface type, and constraint condition on cranial fracture patterns. In the current study, an automated pattern recognition method, or a fracture printing interface (FPI), was developed to classify cranial fracture patterns that were associated with different impact scenarios documented in previous experiments. The FPI accurately predicted the energy level when the impact surface type was rigid. Additionally, the FPI was exceedingly successful in identifying fractures caused by skulls dropped with high-level energy (97% accuracy). The FPI, currently developed on the porcine data, may in the future be transferred to the task of classifying cranial fracture patterns in human infant skulls. © 2016 American Academy of Forensic Sciences.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had been presented and, if so, with which type of information it had been paired during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Novelty preference in patients with developmental amnesia.
Munoz, M; Chadwick, M; Perez-Hernandez, E; Vargha-Khadem, F; Mishkin, M
2011-12-01
To re-examine whether or not selective hippocampal damage reduces novelty preference in visual paired comparison (VPC), we presented two different versions of the task to a group of patients with developmental amnesia (DA), each of whom sustained this form of pathology early in life. Compared with normal control participants, the DA group showed a delay-dependent reduction in novelty preference on one version of the task and an overall reduction on both versions combined. Because VPC is widely considered to be a measure of incidental recognition, the results appear to support the view that the hippocampus contributes to recognition memory. A difficulty for this conclusion, however, is that according to one current view the hippocampal contribution to recognition is limited to task conditions that encourage recollection of an item in some associated context, and according to another current view, to recognition of an item with the high confidence judgment that reflects a strong memory. By contrast, VPC, throughout which the participant remains entirely uninstructed other than to view the stimuli, would seem to lack such task conditions and so would likely lead to recognition based on familiarity rather than recollection or, alternatively, weak memories rather than strong. However, before concluding that the VPC impairment therefore contradicts both current views regarding the role of the hippocampus in recognition memory, two possibilities that would resolve this issue need to be investigated. One is that some variable in VPC, such as the extended period of stimulus encoding during familiarization, overrides its incidental nature, and, because this condition promotes either recollection- or strength-based recognition, renders the task hippocampal-dependent. 
The other possibility is that VPC, rather than providing a measure of incidental recognition, actually assesses an implicit, information-gathering process modulated by habituation, for which the hippocampus is also partly responsible, independent of its role in recognition. Copyright © 2010 Wiley Periodicals, Inc.
Sentence Verification, Sentence Recognition, and the Semantic-Episodic Distinction
ERIC Educational Resources Information Center
Shoben, Edward J.; And Others
1978-01-01
In an attempt to assess the validity of the distinction between episodic and semantic memory, this research examined the influence of two variables on sentence verification (presumably a semantic memory task) and sentence recognition (presumably an episodic memory task). (Editor)
Frontoparietal tDCS Benefits Visual Working Memory in Older Adults With Low Working Memory Capacity.
Arciniega, Hector; Gözenman, Filiz; Jones, Kevin T; Stephens, Jaclyn A; Berryhill, Marian E
2018-01-01
Working memory (WM) permits maintenance of information over brief delays and is an essential executive function. Unfortunately, WM is subject to age-related decline. Some evidence supports the use of transcranial direct current stimulation (tDCS) to improve visual WM. A gap in knowledge is an understanding of the mechanism characterizing these tDCS-linked effects. To address this gap, we compared the effects of two tDCS montages on visual working memory (VWM) performance. The bifrontal montage was designed to stimulate the heightened bilateral frontal activity observed in aging adults. The unilateral frontoparietal montage was designed to stimulate activation patterns observed in young adults. Participants completed three sessions (bilateral frontal, right frontoparietal, sham) of anodal tDCS (20 min, 2 mA). During stimulation, participants performed a visual long-term memory (LTM) control task and a visual WM task. There was no effect of tDCS on the LTM task. Participants receiving right unilateral tDCS showed a WM benefit. This pattern was most robust in older adults with low WM capacity. To address the concern that the key difference between the two tDCS montages could be tDCS over the posterior parietal cortex (PPC), we included new analyses from a previous study applying tDCS targeting the PPC paired with a recognition VWM task. No significant main effects were found. A subsequent experiment in young adults found no significant effect of either tDCS montage on either task. These data indicate that tDCS montage, age and WM capacity should be considered when designing tDCS protocols. We interpret these findings as suggesting that protocols designed to restore more youthful patterns of brain activity are superior to those that compensate for age-related changes.
Memory reactivation in healthy aging: evidence of stimulus-specific dedifferentiation.
St-Laurent, Marie; Abdi, Hervé; Bondad, Ashley; Buchsbaum, Bradley R
2014-03-19
We investigated how aging affects the neural specificity of mental replay, the act of conjuring up past experiences in one's mind. We used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis to quantify the similarity between brain activity elicited by the perception and memory of complex multimodal stimuli. Young and older human adults viewed and mentally replayed short videos from long-term memory while undergoing fMRI. We identified a wide array of cortical regions involved in visual, auditory, and spatial processing that supported stimulus-specific representation at perception as well as during mental replay. Evidence of age-related dedifferentiation was subtle at perception but more salient during mental replay, and age differences at perception could not account for older adults' reduced neural reactivation specificity. Performance on a post-scan recognition task for video details correlated with neural reactivation in young but not in older adults, indicating that in-scan reactivation benefited post-scan recognition in young adults, but that some older adults may have benefited from alternative rehearsal strategies. Although young adults recalled more details about the video stimuli than older adults on a post-scan recall task, patterns of neural reactivation correlated with post-scan recall in both age groups. These results demonstrate that the mechanisms supporting recall and recollection are linked to accurate neural reactivation in both young and older adults, but that age affects how efficiently these mechanisms can support memory's representational specificity in a way that cannot simply be accounted for by degraded sensory processes.
Sheridan, Heather; Reingold, Eyal M
2017-03-01
To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks. Copyright © 2015 Elsevier Inc. All rights reserved.
Relationship between listeners' nonnative speech recognition and categorization abilities
Atagi, Eriko; Bent, Tessa
2015-01-01
Enhancement of the perceptual encoding of talker characteristics (indexical information) in speech can facilitate listeners' recognition of linguistic content. The present study explored this indexical-linguistic relationship in nonnative speech processing by examining listeners' performance on two tasks: nonnative accent categorization and nonnative speech-in-noise recognition. Results indicated substantial variability across listeners in their performance on both the accent categorization and nonnative speech recognition tasks. Moreover, listeners' accent categorization performance correlated with their nonnative speech-in-noise recognition performance. These results suggest that having more robust indexical representations for nonnative accents may allow listeners to more accurately recognize the linguistic content of nonnative speech. PMID:25618098
Semantic memory influences episodic retrieval by increased familiarity.
Wang, Yujuan; Mao, Xinrui; Li, Bingcan; Lu, Baoqing; Guo, Chunyan
2016-07-06
The role of familiarity in associative recognition has been investigated in a number of studies, which have indicated that familiarity can facilitate recognition under certain circumstances. The ability of a pre-experimentally existing common representation to boost the contribution of familiarity has rarely been investigated. In addition, although many studies have investigated the interactions between semantic memory and episodic retrieval, the conditions that influence the presence of specific patterns were unclear. This study aimed to address these two questions. We manipulated the degree of overlap between the two representations using synonym and nonsynonym pairs in an associative recognition task. Results indicated that an increased degree of overlap enhanced recognition performance. The analysis of event-related potentials effects in the test phase showed that synonym pairs elicited both types of old/rearranged effects, whereas nonsynonym pairs elicited a late old/rearranged effect. These results confirmed that a common representation, irrespective of source, was necessary for assuring the presence of familiarity, but a common representation could not distinguish associative recognition depending on familiarity alone. Moreover, our expected double dissociation between familiarity and recollection was absent, which indicated that mode selection may be influenced by the degree of distinctness between old and rearranged pairs rather than the degree of overlap between representations.
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the word list used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
False memory and level of processing effect: an event-related potential study.
Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara
2012-09-12
Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were auditorily presented here to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERP than items studied under deep encoding conditions at a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.
NASA Astrophysics Data System (ADS)
Deng, Jie; Yao, Jun; Dewald, Julius P. A.
2005-12-01
In this paper, we attempt to determine a subject's intention of generating torque at the shoulder or elbow, two neighboring joints, using scalp electroencephalogram signals from 163 electrodes for a brain-computer interface (BCI) application. To achieve this goal, we have applied a time-frequency synthesized spatial patterns (TFSP) BCI algorithm with a presorting procedure. Using this method, we were able to achieve an average recognition rate of 89% in four healthy subjects, which is comparable to the highest rates reported in the literature but now for tasks with much closer spatial representations on the motor cortex. This result demonstrates, for the first time, that the TFSP BCI method can be applied to separate intentions between generating static shoulder versus elbow torque. Furthermore, in this study, the potential application of this BCI algorithm for brain-injured patients was tested in one chronic hemiparetic stroke subject. A recognition rate of 76% was obtained, suggesting that this BCI method can provide a potential control signal for neural prostheses or other movement coordination improving devices for patients following brain injury.
Analysis of Spanish consonant recognition in 8-talker babble.
Moreno-Torres, Ignacio; Otero, Pablo; Luna-Ramírez, Salvador; Garayzábal Heinze, Elena
2017-05-01
This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble (-6, -2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d' measure, and confusion patterns were analyzed using a graphical method (confusion graphs). The resulting ranking indicated the existence of three resistance groups: (1) high resistance: /ʧ, s, ʝ/; (2) mid resistance: /r, l, m, n/; and (3) low resistance: /t, θ, x, ɡ, b, d, k, f, p/. Confusions involved mostly place of articulation and voicing errors, and occurred especially among consonants in the same resistance group. Three perceptual confusion groups were identified: the three low-energy fricatives (i.e., /f, θ, x/), the six stops (i.e., /p, t, k, b, d, ɡ/), and three consonants with clear formant structure (i.e., /m, n, l/). The factors underlying consonant resistance and confusion patterns are discussed. The results are compared with data from other languages.
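The d' ranking above comes from standard signal detection theory: sensitivity is the difference between the z-transformed hit and false-alarm rates. A minimal sketch of that computation (the paper's exact handling of extreme rates is not stated, so a log-linear correction is assumed here):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the z-transform finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one consonant in one noise condition.
value = d_prime(hits=45, misses=5, false_alarms=10, correct_rejections=40)
print(f"d' = {value:.2f}")
```

Ranking consonants by this value per signal-to-noise ratio reproduces the kind of resistance ordering reported above.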
Pattern Recognition Using Artificial Neural Network: A Review
NASA Astrophysics Data System (ADS)
Kim, Tai-Hoon
Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANN and identify research topics and applications which are at the forefront of this exciting and challenging field.
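The stages the review enumerates (pattern representation, feature scaling, classifier design and learning, selection of training and test samples, performance evaluation) can be illustrated with a toy ANN pipeline; scikit-learn and its bundled digits dataset are assumptions for illustration, not tools named in the review:

```python
# Minimal statistical pattern-recognition pipeline: split the data,
# standardize features, train a small neural network, evaluate accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 digit images as vectors
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)    # training/test sample selection

clf = make_pipeline(StandardScaler(),        # feature normalization
                    MLPClassifier(hidden_layer_sizes=(64,),
                                  max_iter=500, random_state=0))
clf.fit(X_train, y_train)                    # classifier learning
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # evaluation
```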
Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition
Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans
2015-01-01
The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602
Huff, Mark J; Yates, Tyler J; Balota, David A
2018-05-03
Recently, we have shown that two types of initial testing (recall of a list or guessing of critical items repeated over 12 study/test cycles) improved final recognition of related and unrelated word lists relative to restudy. These benefits were eliminated, however, when test instructions were manipulated within subjects and presented after study of each list, procedures designed to minimise expectancy of a specific type of upcoming test [Huff, Balota, & Hutchison, 2016. The costs and benefits of testing and guessing on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1559-1572. doi: 10.1037/xlm0000269], suggesting that testing and guessing effects may be influenced by encoding strategies specific to the type of upcoming task. We follow up on these experiments by examining test-expectancy processes in guessing and testing. Testing and guessing benefits over restudy were not found when test instructions were presented either after (Experiment 1) or before (Experiment 2) a single study/task cycle was completed, nor were benefits found when instructions were presented before study/task cycles and the task was repeated three times (Experiment 3). Testing and guessing benefits emerged only when instructions were presented before a study/task cycle and the task was repeated six times (Experiments 4A and 4B). These experiments demonstrate that initial testing and guessing can produce memory benefits in recognition, but only following substantial task repetitions, which likely promote task-expectancy processes.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
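The visual-tree step, grouping visually similar classes by the similarity of their deep features, can be sketched with plain agglomerative clustering; the toy class-mean features and SciPy tooling below are assumptions, since the paper's own grouping also weighs learning complexity:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy class-mean deep features for 8 object classes (one row per class):
# classes 0-3 and 4-7 are drawn around two different feature centroids.
rng = np.random.default_rng(1)
class_means = np.vstack([rng.normal(loc=i // 4, scale=0.1, size=16)
                         for i in range(8)])

# Agglomerative clustering on inter-class distances yields the groups
# (tree nodes) of visually similar classes for group-specific learning.
Z = linkage(class_means, method="average", metric="euclidean")
groups = fcluster(Z, t=2, criterion="maxclust")
print(groups)
```

Each resulting group would then get its own node classifier in the hierarchy.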
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Lasko, Thomas A; Denny, Joshua C; Levy, Mia A
2013-01-01
Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don't think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data - Electronic Medical Records - typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. 
We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies. PMID:23826094
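The coupling step described above, turning irregularly-timed lab values into a dense, regularly-sampled signal via Gaussian process regression, can be sketched as follows. This is an illustrative posterior-mean computation with an RBF kernel; the kernel hyperparameters and the toy uric-acid-like measurements are invented, not the authors' settings.

```python
# Sketch of coupling irregular clinical observations to a deep learner:
# Gaussian process regression (RBF kernel) infers values on a regular
# grid from episodic measurements. Hyperparameters are illustrative.
import numpy as np

def rbf(a, b, length=30.0):
    # Squared-exponential kernel between two 1-D time arrays.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior_mean(t_obs, y_obs, t_grid, noise=0.1):
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf(t_grid, t_obs)
    return Ks @ np.linalg.solve(K, y_obs)

# Episodic, irregularly spaced measurements (day, value).
t = np.array([0.0, 3.0, 40.0, 95.0, 100.0])
y = np.array([5.1, 5.3, 7.8, 6.0, 5.9])
grid = np.linspace(0, 100, 11)  # regular 10-day grid
print(np.round(gp_posterior_mean(t, y, grid), 2))
```

The resulting regular grid is the kind of dense substrate a standard deep learning architecture can consume, which is the point of the coupling.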
Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders
ERIC Educational Resources Information Center
Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia
2006-01-01
Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…
Prospective memory in context: Moving through a familiar space.
Smith, Rebekah E; Hunt, R Reed; Murray, Amy E
2017-02-01
Successful completion of delayed intentions is a common but important aspect of daily behavior. Such behavior requires not only memory for the intended action but also recognition of the opportunity to perform that action, known collectively as prospective memory. The fact that prospective memory tasks occur in the midst of other activities is captured in laboratory tasks by embedding the prospective memory task in an ongoing activity. In many cases the requirement to perform the prospective memory task results in a reduction in ongoing performance relative to when the ongoing task is performed alone. This is referred to as the cost to the ongoing task and reflects the allocation of attentional resources to the prospective memory task. The current study examined the pattern of cost across the ongoing task when the ongoing task provided contextual information that in turn allowed participants to anticipate when target events would occur within the ongoing task. The availability of contextual information reduced ongoing task response times overall, with an increase in response times closer to the target locations (Experiments 1-3). The fourth study, drawing on the Event Segmentation Theory, provided support for the proposal made by the Preparatory Attentional and Memory Processes theory of prospective memory that decisions about the allocation of attention to the prospective memory task are more likely to be made at points of transition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The role of visual imagery in the retention of information from sentences.
Drose, G S; Allen, G L
1994-01-01
We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments used recognition memory tasks. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.
Image pattern recognition supporting interactive analysis and graphical visualization
NASA Technical Reports Server (NTRS)
Coggins, James M.
1992-01-01
Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.
2014-01-01
Background: In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods: A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results: Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion: The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469
Movement cues aid face recognition in developmental prosopagnosia.
Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah
2015-11-01
Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
ERIC Educational Resources Information Center
Janning, Ruth; Schatten, Carlotta; Schmidt-Thieme, Lars
2016-01-01
Recognising students' emotion, affect or cognition is a relatively young field and still a challenging task in the area of intelligent tutoring systems. There are several ways to use the output of these recognition tasks within the system. The approach most often mentioned in the literature is using it for giving feedback to the students. The…
Age-related differences in agenda-driven monitoring of format and task information
Mitchell, Karen J.; Ankudowich, Elizabeth; Durbin, Kelly A.; Greene, Erich J.; Johnson, Marcia K.
2013-01-01
Age-related source memory deficits may arise, in part, from changes in the agenda-driven processes that control what features of events are relevant during remembering. Using fMRI, we compared young and older adults on tests assessing source memory for format (picture, word) or encoding task (self-, other-referential), as well as on old-new recognition. Behaviorally, relative to old-new recognition, older adults showed disproportionate and equivalent deficits on both source tests compared to young adults. At encoding, both age groups showed expected activation associated with format in posterior visual processing areas, and with task in medial prefrontal cortex. At test, the groups showed similar selective, agenda-related activity in these representational areas. There were, however, marked age differences in the activity of control regions in lateral and medial prefrontal cortex and lateral parietal cortex. Results of correlation analyses were consistent with the idea that young adults had greater trial-by-trial agenda-driven modulation of activity (i.e., greater selectivity) than did older adults in representational regions. Thus, under selective remembering conditions where older adults showed clear differential regional activity in representational areas depending on type of test, they also showed evidence of disrupted frontal and parietal function and reduced item-by-item modulation of test-appropriate features. This pattern of results is consistent with an age-related deficit in the engagement of selective reflective attention. PMID:23357375
Veselis, Robert A; Pryor, Kane O; Reinsel, Ruth A; Li, Yuelin; Mehta, Meghana; Johnson, Ray
2009-02-01
Intravenous drugs acting via gamma-aminobutyric acid receptors produce memory impairment during conscious sedation. Memory function was assessed using event-related potentials (ERPs) while the drug was present. The continuous recognition task measured recognition of photographs from working (6 s) and long-term (27 s) memory while ERPs were recorded from Cz (familiarity recognition) and Pz electrodes (recollection recognition). Volunteer participants received sequential doses of one of placebo (n = 11), 0.45 and 0.9 microg/ml propofol (n = 10), 20 and 40 ng/ml midazolam (n = 12), 1.5 and 3 microg/ml thiopental (n = 11), or 0.25 and 0.4 ng/ml dexmedetomidine (n = 11). End-of-day yes/no recognition 225 min after the end of drug infusion tested memory retention of pictures encoded on the continuous recognition tasks. Active drugs increased reaction times and impaired memory on the continuous recognition task equally, except for a greater effect of midazolam (P < 0.04). Forgetting from continuous recognition tasks to end of day was similar for all drugs (P = 0.40), greater than placebo (P < 0.001). Propofol and midazolam decreased the area between first presentation (new) and recognized (old, 27 s later) ERP waveforms from long-term memory for familiarity (P = 0.03) and possibly for recollection processes (P = 0.12). Propofol shifted ERP amplitudes to smaller voltages (P < 0.002). Dexmedetomidine may have impaired familiarity more than recollection processes (P = 0.10). Thiopental had no effect on ERPs. Propofol and midazolam impaired recognition ERPs from long-term memory but not working memory. ERP measures of memory revealed different pathways to end-of-day memory loss as early as 27 s after encoding.
Cognitive aspects of haptic form recognition by blind and sighted subjects.
Bailes, S M; Lambert, R M
1986-11-01
Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.
Hemispheric asymmetries of a motor memory in a recognition test after learning a movement sequence.
Leinen, Peter; Panzer, Stefan; Shea, Charles H
2016-11-01
Two experiments utilizing a spatial-temporal movement sequence were designed to determine if the memory of the sequence is lateralized in the left or right hemisphere. In Experiment 1, dominant right-handers were randomly assigned to one of two acquisition groups: a left-hand starter and a right-hand starter group. After an acquisition phase, reaction time (RT) was measured in a recognition test by providing the learned sequential pattern in the left or right visual half-field for 150ms. In a retention test and two transfer tests the dominant coordinate system for sequence production was evaluated. In Experiment 2 dominant left-handers and dominant right-handers had to acquire the sequence with their dominant limb. The results of Experiment 1 indicated that RT was significantly shorter when the acquired sequence was provided in the right visual field during the recognition test. The same results occurred in Experiment 2 for dominant right-handers and left-handers. These results indicated a right visual field left hemisphere advantage in the recognition test for the practiced stimulus for dominant left and right-handers, when the task was practiced with the dominant limb. Copyright © 2016 Elsevier B.V. All rights reserved.
Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models
Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.
2015-01-01
It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497
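The pattern recognition step described above, classifying the four Mandarin lexical tones from the fundamental-frequency contour a processing strategy transmits, can be illustrated with a nearest-template classifier. The coarse three-point contour templates and test contours below are synthetic stand-ins for the acoustic-model outputs in the study, not its actual data.

```python
# Illustrative nearest-template tone classifier: each Mandarin lexical
# tone is a coarse normalized F0-contour template; an input contour is
# assigned to the template with the smallest squared error. Synthetic data.
TEMPLATES = {
    1: [1.0, 1.0, 1.0],   # tone 1: high level
    2: [0.4, 0.6, 0.9],   # tone 2: rising
    3: [0.5, 0.2, 0.6],   # tone 3: dipping
    4: [1.0, 0.6, 0.2],   # tone 4: falling
}

def classify_tone(contour):
    # Nearest template under sum-of-squared-error over the contour points.
    def sse(t):
        return sum((a - b) ** 2 for a, b in zip(contour, TEMPLATES[t]))
    return min(TEMPLATES, key=sse)

print(classify_tone([0.45, 0.55, 0.85]))  # rising contour -> 2
```

A strategy that preserves more of the F0 fine structure should yield contours that land closer to the correct template, which is the sense in which classification accuracy can predict trends across signal processing algorithms.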
Relational and item-specific influences on generate-recognize processes in recall.
Guynn, Melissa J; McDaniel, Mark A; Strosser, Garrett L; Ramirez, Juan M; Castleberry, Erica H; Arnett, Kristen H
2014-02-01
The generate-recognize model and the relational-item-specific distinction are two approaches to explaining recall. In this study, we consider the two approaches in concert. Following Jacoby and Hollingshead (Journal of Memory and Language 29:433-454, 1990), we implemented a production task and a recognition task following production (1) to evaluate whether generation and recognition components were evident in cued recall and (2) to gauge the effects of relational and item-specific processing on these components. An encoding task designed to augment item-specific processing (anagram-transposition) produced a benefit on the recognition component (Experiments 1-3) but no significant benefit on the generation component (Experiments 1-3), in the context of a significant benefit to cued recall. By contrast, an encoding task designed to augment relational processing (category-sorting) did produce a benefit on the generation component (Experiment 3). These results converge on the idea that in recall, item-specific processing impacts a recognition component, whereas relational processing impacts a generation component.
ERIC Educational Resources Information Center
Unsworth, Nash; Brewer, Gene A.
2009-01-01
The authors of the current study examined the relationships among item-recognition, source-recognition, free recall, and other memory and cognitive ability tasks via an individual differences analysis. Two independent sources of variance contributed to item-recognition and source-recognition performance, and these two constructs related…
Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise
Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther
2016-01-01
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. 
This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access. PMID:27458400
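The predictor analysis described above, regressing individual speech-recognition scores on vocabulary size and lexical access time, can be sketched with ordinary least squares. The data below are fabricated toy numbers chosen only so that the slopes mirror the reported direction of effects (larger vocabulary and, counterintuitively, longer access times both going with better scores); they are not the study's data.

```python
# Sketch of an OLS regression of speech-recognition score on vocabulary
# size and lexical access time, using fabricated toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 40
vocab = rng.normal(50, 10, n)      # receptive vocabulary (arbitrary units)
access = rng.normal(1.0, 0.2, n)   # lexical access time (s)
# Simulated scores: vocabulary helps; longer access times also go with
# better scores, mirroring the reported (counterintuitive) correlation.
score = 0.5 * vocab + 10.0 * access + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), vocab, access])  # add intercept column
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(np.round(beta, 2))  # [intercept, vocab slope, access slope]
```

With real data, a hierarchical version would enter the predictors in blocks, as in the study, to see how much each adds to explained variance.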
Retrieval orientation and the control of recollection: an fMRI study.
Morcom, Alexa M; Rugg, Michael D
2012-12-01
This study used event-related fMRI to examine the impact of the adoption of different retrieval orientations on the neural correlates of recollection. In each of two study-test blocks, participants encoded a mixed list of words and pictures and then performed a recognition memory task with words as the test items. In one block, the requirement was to respond positively to test items corresponding to studied words and to reject both new items and items corresponding to the studied pictures. In the other block, positive responses were made to test items corresponding to pictures, and items corresponding to words were classified along with the new items. On the basis of previous ERP findings, we predicted that in the word task, recollection-related effects would be found for target information only. This prediction was fulfilled. In both tasks, targets elicited the characteristic pattern of recollection-related activity. By contrast, nontargets elicited this pattern in the picture task, but not in the word task. Importantly, the left angular gyrus was among the regions demonstrating this dissociation of nontarget recollection effects according to retrieval orientation. The findings for the angular gyrus parallel prior findings for the "left-parietal" ERP old/new effect and add to the evidence that the effect reflects recollection-related neural activity originating in left ventral parietal cortex. Thus, the results converge with the previous ERP findings to suggest that the processing of retrieval cues can be constrained to prevent the retrieval of goal-irrelevant information.
Crowd Sourcing Data Collection through Amazon Mechanical Turk
2013-09-01
The first recognition study consisted of a Panel Study using a simple detection protocol, in which participants were presented with vignettes and, for... ...variability than the crowdsourcing data set, hewing more closely to the year 1 verbs of interest and simple description grammar. The DT:PS data were... [Abbreviations: RT:PS = Recognition Task: Panel Study; RT:RT = Recognition Task: Round Table; S3 = Amazon Simple Storage Service; SVPA = Single Verb Present/Absent]
The own-age face recognition bias is task dependent.
Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J
2015-08-01
The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.
Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J
2011-04-01
We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved. PMID:21333662
Pires, Ivan Miguel; Garcia, Nuno M; Pombo, Nuno; Flórez-Revuelta, Francisco; Spinsante, Susanna
2018-02-21
Sensors available on mobile devices allow the automatic identification of Activities of Daily Living (ADL). This paper describes an approach for the creation of a framework for the identification of ADL, taking into account several concepts, including data acquisition, data processing, data fusion, and pattern recognition. These concepts can be mapped onto different modules of the framework. The proposed framework should perform the identification of ADL without an Internet connection, performing these tasks locally on the mobile device, taking into account the hardware and software limitations of these devices. The main purpose of this paper is to present a new approach for the creation of a framework for the recognition of ADL, analyzing the allowed sensors available in mobile devices and the existing methods available in the literature. PMID:29466316
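The acquisition, processing, fusion, and recognition modules described in this abstract can be sketched as a simple on-device pipeline. This is a minimal illustrative sketch only; the function names, feature choices, and the toy rule-based classifier are assumptions for illustration, not the paper's actual framework or API.

```python
# Hypothetical sketch of a modular ADL pipeline:
# acquisition -> processing -> fusion -> recognition, all computed locally.

from statistics import mean, pstdev

def acquire(raw_streams):
    """Acquisition: collect raw samples from each sensor (lists of floats)."""
    return {name: list(samples) for name, samples in raw_streams.items()}

def process(streams):
    """Processing: extract simple per-sensor features (mean, std dev)."""
    return {name: (mean(s), pstdev(s)) for name, s in streams.items()}

def fuse(features):
    """Fusion: concatenate per-sensor features into one feature vector."""
    vec = []
    for name in sorted(features):
        vec.extend(features[name])
    return vec

def recognize(vector, threshold=1.0):
    """Recognition: toy threshold rule standing in for a trained classifier."""
    activity_level = sum(abs(v) for v in vector)
    return "walking" if activity_level > threshold else "resting"

streams = {"accelerometer": [0.1, 0.9, 1.2], "gyroscope": [0.0, 0.2, 0.1]}
label = recognize(fuse(process(acquire(streams))))
```

Keeping each stage a separate function mirrors the paper's point that the concepts map onto distinct framework modules, so individual stages can be swapped without touching the rest of the pipeline.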
A neuroimaging study of conflict during word recognition.
Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F
2010-08-04
Using functional magnetic resonance imaging the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.
Preti, Emanuele; Richetin, Juliette; Suttora, Chiara; Pisani, Alberto
2016-04-30
Dysfunctions in social cognition characterize personality disorders. However, mixed results have emerged from the literature on emotion processing. Borderline Personality Disorder (BPD) traits are associated variously with enhanced emotion recognition, with impairments, or with functioning equal to controls. These apparent contradictions might result from the complexity of the emotion recognition tasks used and from individual differences in impulsivity and effortful control. We conducted a study in a sample of undergraduate students (n=80), assessing BPD traits, using an emotion recognition task that requires the processing of only visual information or of both visual and acoustic information. We also measured individual differences in impulsivity and effortful control. Results demonstrated the moderating role of some components of impulsivity and effortful control on the capacity of BPD traits to predict anger and happiness recognition. We organized the discussion around the interaction between different components of regulatory functioning and task complexity for a better understanding of emotion recognition in BPD samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
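The "parallel processing by image region" category described above can be sketched with a map-reduce pattern: split one image into tiles, run the same analysis on every tile in parallel (map), then combine the partial results (reduce). This is an illustrative sketch under stated assumptions; the dark-pixel-counting task and all function names are stand-ins, not taken from the paper.

```python
# Map-reduce over image regions: the same task runs on each tile,
# and the partial results are summed in a final reduce step.

from concurrent.futures import ThreadPoolExecutor

def count_dark(tile, threshold=128):
    """Map step: count pixels below the threshold in one region."""
    return sum(1 for row in tile for px in row if px < threshold)

def split_rows(image, n_tiles):
    """Divide an image (a list of pixel rows) into roughly equal tiles."""
    step = max(1, len(image) // n_tiles)
    return [image[i:i + step] for i in range(0, len(image), step)]

def dark_pixels_parallel(image, n_tiles=4):
    """Run the same analysis on each tile in parallel, then reduce."""
    tiles = split_rows(image, n_tiles)
    with ThreadPoolExecutor() as pool:
        partials = pool.map(count_dark, tiles)  # map over regions
    return sum(partials)                        # reduce

image = [[0, 200, 50], [255, 10, 10], [90, 90, 200], [255, 255, 0]]
total = dark_pixels_parallel(image, n_tiles=2)
```

Because every tile receives the identical task, this decomposition scales with the number of regions, which is exactly what makes region-based tasks such as skew detection or multi-face tracking good map-reduce candidates.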
Seemüller, Anna; Fiehler, Katja; Rösler, Frank
2011-01-01
The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.
Application of advanced speech technology in manned penetration bombers
NASA Astrophysics Data System (ADS)
North, R.; Lea, W.
1982-03-01
This report documents research on the potential use of speech technology in a manned penetration bomber aircraft (B-52/G and H). The objectives of the project were to analyze the pilot/copilot crewstation tasks over a three-hour-and-forty-minute mission and determine the tasks that would benefit the most from conversion to speech recognition/generation, determine the technological feasibility of each of the identified tasks, and prioritize these tasks based on these criteria. Secondary objectives of the program were to enunciate research strategies for the application of speech technologies in airborne environments, and to develop guidelines for briefing user commands on the potential of using speech technologies in the cockpit. The results of this study indicated that for the B-52 crewmember, speech recognition would be most beneficial for retrieving chart and procedural data contained in the flight manuals. Technological feasibility analysis indicated that the checklist and procedural retrieval tasks would be highly feasible for a speech recognition system.
Guardia-Olmos, Joan; Zarabozo-Hurtado, Daniel; Peró-Cebollero, Maribe; Gudayol-Farré, Esteban; Gómez-Velázquez, Fabiola R; González-Garrido, Andrés
2017-12-04
The study of orthographic errors in a transparent language such as Spanish is an important topic in relation to writing acquisition, because in Spanish it is common to write pseudohomophones as valid words. The main objective of the present study was to explore possible differences in activation patterns in brain areas while processing pseudohomophone orthographic errors between participants with high (High Spelling Skills (HSS)) and low (Low Spelling Skills (LSS)) spelling abilities. We hypothesized that (a) the detection of orthographic errors will activate bilateral inferior frontal gyri, and that (b) this effect will be greater in the HSS group. Two groups of 12 Mexican participants, each matched by age, were formed based on their results in a set of spelling-related ad hoc tests: HSS and LSS groups. During the fMRI session, two experimental tasks were applied involving correct spellings and pseudohomophone substitutions of Spanish words: first, a spelling recognition task, and second, a letter search task. The LSS group showed, as expected, a lower number of correct responses (F(1, 21) = 52.72, p < .001, η2 = .715) and higher reaction times compared to the HSS group for the spelling task (F(1, 21) = 60.03, p < .001, η2 = .741). However, this pattern was reversed when the participants were asked to decide on the presence of a vowel in the words, regardless of spelling. The fMRI data showed an engagement of the right inferior frontal gyrus in the HSS group during the spelling task. However, temporal, frontal, and subcortical brain regions of the LSS group were activated during the same task.