Sample records for perception recognition learning

  1. Sensory, Cognitive, and Sensorimotor Learning Effects in Recognition Memory for Music.

    PubMed

    Mathias, Brian; Tillmann, Barbara; Palmer, Caroline

    2016-08-01

    Recent research suggests that perception and action are strongly interrelated and that motor experience may aid memory recognition. We investigated the role of motor experience in auditory memory recognition processes by musicians using behavioral, ERP, and neural source current density measures. Skilled pianists learned one set of novel melodies by producing them and another set by perception only. Pianists then completed an auditory memory recognition test during which the previously learned melodies were presented with or without an out-of-key pitch alteration while the EEG was recorded. Pianists indicated whether each melody was altered from or identical to one of the original melodies. Altered pitches elicited a larger N2 ERP component than original pitches, and pitches within previously produced melodies elicited a larger N2 than pitches in previously perceived melodies. Cortical motor planning regions were more strongly activated within the time frame of the N2 following altered pitches in previously produced melodies compared with previously perceived melodies, and larger N2 amplitudes were associated with greater detection accuracy following production learning than perception learning. Early sensory (N1) and later cognitive (P3a) components elicited by pitch alterations correlated with predictions of sensory echoic and schematic tonality models, respectively, but only for the perception learning condition, suggesting that production experience alters the extent to which performers rely on sensory and tonal recognition cues. These findings provide evidence for distinct time courses of sensory, schematic, and motoric influences within the same recognition task and suggest that learned auditory-motor associations influence responses to out-of-key pitches.

  2. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations.

    PubMed

    Butler, Andrew J; James, Thomas W; James, Karin Harman

    2011-11-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.

  3. Computational validation of the motor contribution to speech perception.

    PubMed

    Badino, Leonardo; D'Ausilio, Alessandro; Fadiga, Luciano; Metta, Giorgio

    2014-07-01

    Action perception and recognition are core abilities fundamental for human social interaction. A parieto-frontal network (the mirror neuron system) matches visually presented biological motion information onto observers' motor representations. This process of matching the actions of others onto our own sensorimotor repertoire is thought to be important for action recognition, providing a non-mediated "motor perception" based on a bidirectional flow of information along the mirror parieto-frontal circuits. State-of-the-art machine learning strategies for hand action identification have shown better performances when sensorimotor data, as opposed to visual information only, are available during learning. As speech is a particular type of action (with acoustic targets), it is expected to activate a mirror neuron mechanism. Indeed, in speech perception, motor centers have been shown to be causally involved in the discrimination of speech sounds. In this paper, we review recent neurophysiological and machine learning-based studies showing (a) the specific contribution of the motor system to speech perception and (b) that automatic phone recognition is significantly improved when motor data are used during training of classifiers (as opposed to learning from purely auditory data). Copyright © 2014 Cognitive Science Society, Inc.
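    The machine-learning side of this record (classifiers improving when motor data accompany audio during training) can be conveyed with a small sketch. The following is a hypothetical numpy-only toy, not the authors' system (which used real articulatory recordings and phone classifiers): a logistic-regression classifier is trained once on "audio" features and once on audio plus "motor" features, with synthetic data deliberately constructed so that the motor channel carries the class information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: two "phone" classes. The audio features here carry
# no class information at all, while the motor (articulatory) features
# separate the classes well -- an exaggerated stand-in for the reviewed
# result that motor data aid phone classification.
n, split = 400, 300
labels = rng.integers(0, 2, n)
audio = rng.normal(0.0, 1.0, (n, 5))
motor = rng.normal(0.0, 0.5, (n, 3)) + 1.5 * (2 * labels[:, None] - 1)
both = np.hstack([audio, motor])

def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain gradient-descent logistic regression; returns weights, bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(X, y, w, b):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

wa, ba = train_logreg(audio[:split], labels[:split])
wm, bm = train_logreg(both[:split], labels[:split])
acc_audio = accuracy(audio[split:], labels[split:], wa, ba)
acc_both = accuracy(both[split:], labels[split:], wm, bm)
print(f"audio-only: {acc_audio:.2f}  audio+motor: {acc_both:.2f}")
```

    By construction the audio+motor classifier generalizes far better here; the improvement reported in the actual study is, of course, much more modest.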

  4. Speech Recognition Software for Language Learning: Toward an Evaluation of Validity and Student Perceptions

    ERIC Educational Resources Information Center

    Cordier, Deborah

    2009-01-01

    A renewed focus on foreign language (FL) learning and speech for communication has resulted in computer-assisted language learning (CALL) software developed with Automatic Speech Recognition (ASR). ASR features for FL pronunciation (Lafford, 2004) are functional components of CALL designs used for FL teaching and learning. The ASR features…

  5. Recognition, Accreditation and Validation of Non-Formal and Informal Learning: Prospects for Lifelong Learning in Nepal

    ERIC Educational Resources Information Center

    Regmi, Kapil Dev

    2009-01-01

    This study was an exploration on the various issues related to recognition, accreditation and validation of non-formal and informal learning to open up avenues for lifelong learning and continuing education in Nepal. The perceptions, experiences, and opinions of Nepalese Development Activists, Educational Administrators, Policy Actors and…

  6. Human-assisted sound event recognition for home service robots.

    PubMed

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated the auditory perception capabilities and human-robot collaboration in sound event recognition.

  7. Assessment and Recognition of Non-Formal and Informal Learning: A Lithuanian Case of Novice Consultants' Experience

    ERIC Educational Resources Information Center

    Burkšaitiene, Nijole

    2015-01-01

    This article reports the results of the investigation into institutional support provided to adults by 12 novice consultants on assessment and recognition of their non-formal and informal learning in four institutions of higher education (HEIs) in Lithuania. Using the general systems perspective and perception theory, novice consultants'…

  8. Perceptual learning of degraded speech by minimizing prediction error.

    PubMed

    Sohoglu, Ediz; Davis, Matthew H

    2016-03-22

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
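    The single-mechanism predictive-coding account in this abstract can be caricatured in a few lines of code. The sketch below is entirely hypothetical (invented gains, learning rates, and signal sizes, not the authors' simulations): one prediction-error-minimizing update produces both an immediate benefit of an accurate prior and a gradual, trial-by-trial improvement as a generative gain is learned.

```python
import numpy as np

rng = np.random.default_rng(1)

# A listener holds an estimate r of the speech content and descends the
# error between predicted and heard (degraded) input. Prior knowledge
# sets the starting estimate; perceptual learning slowly adapts the gain
# g that maps content to degraded acoustics.
true_speech = rng.normal(size=20)
degraded = 0.4 * true_speech + rng.normal(scale=0.2, size=20)

def settle(prior, g, steps=5, lr=0.05):
    """Briefly settle r by gradient descent on ||degraded - g*r||^2."""
    r = prior.copy()
    for _ in range(steps):
        err = degraded - g * r
        r += lr * g * err
    return float(np.mean((degraded - g * r) ** 2)), r

# Immediate effect of prior knowledge (e.g. matching text): a lower
# residual prediction error, the analogue of the reduced STG response.
e_no_prior, _ = settle(np.zeros(20), g=0.4)
e_prior, _ = settle(true_speech.copy(), g=0.4)

# Longer-term learning: the same error signal slowly adapts g over trials.
g_est = 1.0
errors = []
for _ in range(30):
    e, r = settle(true_speech.copy(), g=g_est)
    errors.append(e)
    g_est += 0.5 * np.mean((degraded - g_est * r) * r)
print(f"no prior: {e_no_prior:.3f}  prior: {e_prior:.3f}  "
      f"learning: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

    Both the prior-knowledge effect and the learning effect fall out of the same minimization, which is the qualitative point of the paper's simulations.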

  9. Perceptual learning of degraded speech by minimizing prediction error

    PubMed Central

    Sohoglu, Ediz

    2016-01-01

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech. PMID:26957596

  10. Motion Based Target Acquisition and Evaluation in an Adaptive Machine Vision System

    DTIC Science & Technology

    1995-05-01

    Learned scan paths are the active processes of perception. Rizzo et al. (1987) studied the fixation patterns of two patients with impaired facial recognition and learning and found an increase in the randomness of the scan patterns compared to controls, indicating that the cortex was failing to direct…

  11. Learning in shifts of transient attention improves recognition of parts of ambiguous figure-ground displays.

    PubMed

    Kristjánsson, Arni

    2009-04-24

    Previously demonstrated learning effects in shifts of transient attention have only been shown to result in beneficial effects upon secondary discrimination tasks and affect landing points of express saccades. Can such learning result in more direct effects upon perception than previously demonstrated? Observers performed a cued Vernier acuity discrimination task where the cue was one of a set of ambiguous figure-ground displays (with a black and white part). The critical measure was whether, if a target appeared consistently within a part of a cue of a certain brightness, this would result in learning effects and whether such learning would then affect recognition of the cue parts. Critically the target always appeared within the same part of each individual cue. Some cues were used in early parts of streaks of repetition of cue-part brightness, and others in latter parts of such streaks. All the observers showed learning in shifts of transient attention, with improved performance the more often the target appeared within the part of the cue of the same brightness. Subsequently the observers judged whether cue-parts had been parts of the cues used on the preceding discrimination task. Recognition of the figure parts, where the target had consistently appeared, improved strongly with increased length of streaks of repetition of cue-part brightness. Learning in shifts of transient attention leads not only to faster attention shifts but to direct effects upon perception, in this case recognition of parts of figure-ground ambiguous cues.

  12. Learning to perceive and recognize a second language: the L2LP model revised.

    PubMed

    van Leussen, Jan-Willem; Escudero, Paola

    2015-01-01

    We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has fewer phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating in the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions about learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.
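    The meaning-driven category reduction proposed here can be conveyed with a deliberately simplified toy. This is not the published L2LP implementation (which rests on linguistically grounded perception grammars); it is a nearest-mean sketch with invented values: two inherited L1 categories are updated online by lexical feedback, and because every token of the single L2 category maps to the same word, both category means are drawn together.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two inherited L1 category means along an acoustic continuum (arbitrary
# "Hz" values); the L2 input has a single category centered at 500, and
# every token maps to the same word.
means = np.array([300.0, 700.0])
tokens = rng.normal(500.0, 60.0, 500)

lr = 0.05
for x in tokens:
    # Meaning-driven update: whichever category wins, the same word is
    # recognized, so the winning category's mean moves toward the token.
    k = int(np.argmin(np.abs(means - x)))
    means[k] += lr * (x - means[k])

print(means)  # both means have been drawn in toward ~500
```

    The two categories effectively merge without the learner ever being told how many phonemes the L2 has, which is the qualitative behaviour the revised model aims for.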

  13. Homeschooling Parent/Teachers' Perceptions on Educating Struggling High School Students and their College Readiness

    ERIC Educational Resources Information Center

    McCullough, Brenda Tracy

    2013-01-01

    A general problem is that testing a homeschooled child for learning disabilities (LD) is not required in the state of Texas and therefore dependent on the homeschooling parent's recognition and desire to test. A qualitative exploratory method was used to determine the perceptions of parent/teachers on their struggling high school students'…

  14. Liberated Learning: Analysis of University Students' Perceptions and Experiences with Continuous Automated Speech Recognition

    ERIC Educational Resources Information Center

    Ryba, Ken; McIvor, Tom; Shakir, Maha; Paez, Di

    2006-01-01

    This study examined continuous automated speech recognition in the university lecture theatre. The participants were both native speakers of English (L1) and English as a second language students (L2) enrolled in an information systems course (Total N=160). After an initial training period, an L2 lecturer in information systems delivered three…

  15. Character Recognition Using Novel Optoelectronic Neural Network

    DTIC Science & Technology

    1993-04-01

    interest will include machine learning and perception.

  16. Mirror representations innate versus determined by experience: a viewpoint from learning theory.

    PubMed

    Giese, Martin A

    2014-04-01

    From the viewpoint of pattern recognition and computational learning, mirror neurons form an interesting multimodal representation that links action perception and planning. While it seems unlikely that all details of such representations are specified by the genetic code, robust learning of such complex representations likely requires an appropriate interplay between plasticity, generalization, and anatomical constraints of the underlying neural architecture.

  17. Evaluation of the benefits of assistive reading software: perceptions of high school students with learning disabilities.

    PubMed

    Chiang, Hsin-Yu; Liu, Chien-Hsiou

    2011-01-01

    Using assistive reading software may be a cost-effective way to increase the opportunity for independent learning in students with learning disabilities. However, the effectiveness and perception of assistive reading software has seldom been explored in English-as-a-second language students with learning disabilities. This research was designed to explore the perception and effect of using assistive reading software in high school students with dyslexia (one subtype of learning disability) to improve their English reading and other school performance. The Kurzweil 3000 software was used as the intervention tool in this study. Fifteen students with learning disabilities were recruited, and instruction in the usage of the Kurzweil 3000 was given. Then after 2 weeks, when they were familiarized with the use of Kurzweil 3000, interviews were used to determine the perception and potential benefit of using the software. The results suggested that the Kurzweil 3000 had an immediate impact on students' English word recognition. The students reported that the software made reading, writing, spelling, and pronouncing easier. They also comprehended more during their English class. Further study is needed to determine under which conditions certain hardware/software might be helpful for individuals with special learning needs.

  18. Patterns recognition of electric brain activity using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We propose an approach for recognizing different cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network. After training, an artificial neural network based on the Perceptron architecture reliably identified cube recognition processes, for example, perception of left- or right-oriented Necker cubes with different edge intensities, demonstrating its effectiveness for pattern recognition in experimental EEG data.
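    As a rough illustration of the classification pipeline this record describes (synthetic signals and invented parameters, not the authors' EEG data), a single-layer perceptron can separate two spectrally distinct "EEG" pattern classes from their FFT magnitudes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic "EEG" pattern classes: one-second epochs dominated by
# different frequencies (stand-ins for the two percepts), classified by a
# single-layer perceptron on their magnitude spectra.
fs = 128
t = np.arange(fs) / fs

def epoch(freq):
    return np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi)) \
           + 0.5 * rng.normal(size=t.size)

X, y = [], []
for _ in range(100):
    X.append(np.abs(np.fft.rfft(epoch(8.0))))
    y.append(0)
    X.append(np.abs(np.fft.rfft(epoch(12.0))))
    y.append(1)
X, y = np.array(X), np.array(y)

# Classic perceptron learning rule on the first 150 epochs.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(20):
    for xi, yi in zip(X[:150], y[:150]):
        pred = int(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

acc = float(((X[150:] @ w + b > 0).astype(int) == y[150:]).mean())
print(f"held-out accuracy: {acc:.2f}")
```

    Real EEG classification is far noisier than this toy, but the structure (spectral features into a perceptron-style network) matches the approach the abstract sketches.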

  19. The Role of Teachers' Classroom Discipline in Their Teaching Effectiveness and Students' Language Learning Motivation and Achievement: A Path Method

    ERIC Educational Resources Information Center

    Rahimi, Mehrak; Karkami, Fatemeh Hosseini

    2015-01-01

    This study investigated the role of EFL teachers' classroom discipline strategies in their teaching effectiveness and their students' motivation and achievement in learning English as a foreign language. 1408 junior high-school students expressed their perceptions of the strategies their English teachers used (punishment, recognition/reward,…

  20. Learning L2 Pronunciation with a Mobile Speech Recognizer: French /y/

    ERIC Educational Resources Information Center

    Liakin, Denis; Cardoso, Walcir; Liakina, Natallia

    2015-01-01

    This study investigates the acquisition of the L2 French vowel /y/ in a mobile-assisted learning environment, via the use of automatic speech recognition (ASR). Particularly, it addresses the question of whether ASR-based pronunciation instruction using a mobile device can improve the production and perception of French /y/. Forty-two elementary…

  1. Effects of intelligibility on working memory demand for speech perception.

    PubMed

    Francis, Alexander L; Nusbaum, Howard C

    2009-08-01

    Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.

  2. Neurocomputational account of memory and perception: Thresholded and graded signals in the hippocampus.

    PubMed

    Elfman, Kane W; Aly, Mariam; Yonelinas, Andrew P

    2014-12-01

    Recent evidence suggests that the hippocampus, a region critical for long-term memory, also supports certain forms of high-level visual perception. A seemingly paradoxical finding is that, unlike the thresholded hippocampal signals associated with memory, the hippocampus produces graded, strength-based signals in perception. This article tests a neurocomputational model of the hippocampus, based on the complementary learning systems framework, to determine if the same model can account for both memory and perception, and whether it produces the appropriate thresholded and strength-based signals in these two types of tasks. The simulations showed that the hippocampus, and most prominently the CA1 subfield, produced graded signals when required to discriminate between highly similar stimuli in a perception task, but generated thresholded patterns of activity in recognition memory. A threshold was observed in recognition memory because pattern completion occurred for only some trials and completely failed to occur for others; conversely, in perception, pattern completion always occurred because of the high degree of item similarity. These results offer a neurocomputational account of the distinct hippocampal signals associated with perception and memory, and are broadly consistent with proposals that CA1 functions as a comparator of expected versus perceived events. We conclude that the hippocampal computations required for high-level perceptual discrimination are congruous with current neurocomputational models that account for recognition memory, and fit neatly into a broader description of the role of the hippocampus for the processing of complex relational information. © 2014 Wiley Periodicals, Inc.
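    The completion-based threshold described here can be illustrated with a generic Hopfield-style toy, a much simpler network than the complementary-learning-systems model actually tested (all sizes and corruption levels below are invented): stored patterns are retrieved from partial cues, and retrieval quality behaves nonlinearly as cue corruption grows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hopfield-style autoassociative memory: retrieval either falls into the
# stored pattern's attractor (overlap ~1) or misses it, yielding a
# thresholded, completion-based recognition signal.
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def complete(cue, steps=20):
    """Synchronous sign-threshold dynamics from a partial cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

target = patterns[0]
overlaps = []
for flip_frac in (0.1, 0.2, 0.3, 0.45, 0.5):
    cue = target.copy()
    idx = rng.choice(N, int(flip_frac * N), replace=False)
    cue[idx] *= -1
    overlaps.append(round(float(complete(cue) @ target) / N, 2))
print(overlaps)  # overlap with the stored pattern at each corruption level
```

    Completion from a mild corruption snaps back to the stored pattern, while a fully ambiguous cue does not, which is the all-or-none flavour of signal the abstract contrasts with graded perceptual discrimination.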

  3. Speech perception and spoken word recognition: past and present.

    PubMed

    Jusczyk, Peter W; Luce, Paul A

    2002-02-01

    The scientific study of the perception of spoken language has been an exciting, prolific, and productive area of research for more than 50 yr. We have learned much about infants' and adults' remarkable capacities for perceiving and understanding the sounds of their language, as evidenced by our increasingly sophisticated theories of acquisition, process, and representation. We present a selective, but we hope, representative review of the past half century of research on speech perception, paying particular attention to the historical and theoretical contexts within which this research was conducted. Our foci in this review fall on three principle topics: early work on the discrimination and categorization of speech sounds, more recent efforts to understand the processes and representations that subserve spoken word recognition, and research on how infants acquire the capacity to perceive their native language. Our intent is to provide the reader a sense of the progress our field has experienced over the last half century in understanding the human's extraordinary capacity for the perception of spoken language.

  4. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in the consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of the consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorized speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for reading learning difficulties in Mandarin-speaking children. PMID:29312031

  5. Lexico-semantic and acoustic-phonetic processes in the perception of noise-vocoded speech: implications for cochlear implantation

    PubMed Central

    McGettigan, Carolyn; Rosen, Stuart; Scott, Sophie K.

    2014-01-01

    Noise-vocoding is a transformation which, when applied to speech, severely reduces spectral resolution and eliminates periodicity, yielding a stimulus that sounds “like a harsh whisper” (Scott et al., 2000, p. 2401). This process simulates a cochlear implant, where the activity of many thousand hair cells in the inner ear is replaced by direct stimulation of the auditory nerve by a small number of tonotopically-arranged electrodes. Although a cochlear implant offers a powerful means of restoring some degree of hearing to profoundly deaf individuals, the outcomes for spoken communication are highly variable (Moore and Shannon, 2009). Some variability may arise from differences in peripheral representation (e.g., the degree of residual nerve survival) but some may reflect differences in higher-order linguistic processing. In order to explore this possibility, we used noise-vocoding to explore speech recognition and perceptual learning in normal-hearing listeners tested across several levels of the linguistic hierarchy: segments (consonants and vowels), single words, and sentences. Listeners improved significantly on all tasks across two test sessions. In the first session, individual differences analyses revealed two independently varying sources of variability: one lexico-semantic in nature and implicating the recognition of words and sentences, and the other an acoustic-phonetic factor associated with words and segments. However, consequent to learning, by the second session there was a more uniform covariance pattern concerning all stimulus types. A further analysis of phonetic feature recognition allowed greater insight into learning-related changes in perception and showed that, surprisingly, participants did not make full use of cues that were preserved in the stimuli (e.g., vowel duration). 
    We discuss these findings in relation to cochlear implantation, and suggest auditory training strategies to maximize speech recognition performance in the absence of typical cues. PMID:24616669
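    Noise-vocoding itself is easy to sketch. The following numpy-only toy uses invented band edges and cutoffs, and FFT masking in place of the filter banks typically used in such studies: it splits a signal into a few bands, extracts each band's slow amplitude envelope, and uses the envelopes to modulate band-limited noise, discarding periodicity and spectral fine structure.

```python
import numpy as np

rng = np.random.default_rng(5)

def noise_vocode(signal, fs, n_channels=4, env_cutoff=30.0):
    """Sketch of noise-vocoding: per band, keep only the slow amplitude
    envelope and remodulate band-limited noise with it."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Log-spaced band edges between 100 Hz and ~5 kHz (or Nyquist).
    edges = np.geomspace(100, min(5000, fs / 2), n_channels + 1)
    spec = np.fft.rfft(signal)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-pass the signal via FFT masking.
        band = np.fft.irfft(spec * band_mask, n)
        # Envelope: rectify, then low-pass (again by FFT masking).
        env = np.fft.irfft(np.fft.rfft(np.abs(band)) * (freqs <= env_cutoff), n)
        env = np.clip(env, 0, None)
        # Modulate band-limited noise with the envelope.
        noise_band = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * band_mask, n)
        out += env * noise_band
    return out

fs = 16000
t = np.arange(fs) / fs
# A synthetic "voiced" input: a harmonic stack with a slow amplitude contour.
speechlike = np.sin(2 * np.pi * 3 * t) ** 2 * sum(
    np.sin(2 * np.pi * f * t) for f in (220, 440, 660, 880))
vocoded = noise_vocode(speechlike, fs)
```

    The output preserves the envelope dynamics that vocoder listeners rely on while sounding noise-like, mirroring the "harsh whisper" stimulus described in the abstract.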

  6. Recognition of Cross-Cultural Meaning When Developing Online Web Displays.

    ERIC Educational Resources Information Center

    Brown, Ian; Hedberg, John

    The perceptions and practical experiences are important influences when creating and developing online learning experiences in cross cultural contexts. In this study, 15 educational designers studying for their Master's Degree were asked to contribute their interpretations to an ongoing study of what meaning and interpretations were generated from…

  7. Novel insights into the ontogeny of nestmate recognition in Polistes social wasps.

    PubMed

    Signorotti, Lisa; Cappa, Federico; d'Ettorre, Patrizia; Cervo, Rita

    2014-01-01

The importance of early experience in an animal's life is unquestionable, and imprinting-like phenomena may shape important aspects of behaviour. Early learning typically occurs during a sensitive period, which restricts crucial processes of information storage to a specific developmental phase. The characteristics of the sensitive period have been investigated largely in vertebrates, because of their behavioural and neurophysiological complexity and plasticity, but early learning also occurs in invertebrates. In social insects, early learning appears to influence important social behaviours such as nestmate recognition. Yet the mechanisms underlying recognition systems are not fully understood. It is currently believed that Polistes social wasps discriminate nestmates from non-nestmates by perceiving olfactory cues present on the paper of their nest, which are learned during a strict sensitive period immediately after emergence. Here, through differential odour-experience experiments, we show that workers of Polistes dominula develop correct nestmate recognition abilities soon after emergence even in the absence of what have so far been considered the necessary cues (the chemicals spread on the nest paper). P. dominula workers were exposed for the first four days of adult life to paper fragments from their own nest, to fragments from a foreign conspecific nest, or to a neutral condition. Wasps were then transferred to their original nests, where their recognition abilities were tested. Our results show that wasps do not alter their recognition ability whether exposed only to nest material or deprived of it during the early phase of adult life. It thus appears that the nest paper is not used as a source of recognition cues to be learned in a specific time window, although we discuss possible alternative explanations. Our study provides a novel perspective for the study of the ontogeny of nestmate recognition in Polistes wasps and other social insects.

  8. The roles of perceptual and conceptual information in face recognition.

    PubMed

    Schwartz, Linoy; Yovel, Galit

    2016-11-01

The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face from different angles and under different illuminations) or provided with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Implicit multisensory associations influence voice recognition.

    PubMed

    von Kriegstein, Katharina; Giraud, Anne-Lise

    2006-10-01

Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.

  10. Social cognition in schizophrenia and healthy aging: differences and similarities.

    PubMed

    Silver, Henry; Bilker, Warren B

    2014-12-01

Social cognition is impaired in schizophrenia, but it is not clear whether this is specific to the illness and whether emotion perception is selectively affected. To study this we examined the perception of emotional and non-emotional clues in facial expressions, a key social cognitive skill, in schizophrenia patients and old healthy individuals, using young healthy individuals as reference. Tests of object recognition, visual orientation, psychomotor speed, and working memory were included to allow multivariate analysis taking into account other cognitive functions. Schizophrenia patients showed impairments in recognition of identity and emotional facial clues compared to young and old healthy groups. Severity was similar to that for object recognition and visuospatial processing. Older and younger healthy groups did not differ from each other on these tests. Schizophrenia patients and old healthy individuals were similarly impaired in the ability to automatically learn new faces during the testing procedure (measured by the CSTFAC index) compared to young healthy individuals. Social cognition is distinctly impaired in schizophrenia compared to healthy aging. Further study is needed to identify the mechanisms of automatic social cognitive learning impairment in schizophrenia patients and healthy aging individuals and determine whether similar neural systems are affected. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. [Music therapy in adults with cochlear implants : Effects on music perception and subjective sound quality].

    PubMed

    Hutter, E; Grapp, M; Argstatter, H

    2016-12-01

People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in both high and low pitch ranges and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this delicate area could be achieved.

  12. Educational Technology and Student Voice: Examining Teacher Candidates' Perceptions

    ERIC Educational Resources Information Center

    Byker, Erik Jon; Putman, S. Michael; Handler, Laura; Polly, Drew

    2017-01-01

    Student Voice is a term that honors the participatory roles that students have when they enter learning spaces like classrooms. Student Voice is the recognition of students' choice, creativity, and freedom. Seminal educationists--like Dewey and Montessori--centered the purposes of education in the flourishing and valuing of Student Voice. This…

  13. Discussion: Changes in Vocal Production and Auditory Perception after Hair Cell Regeneration.

    ERIC Educational Resources Information Center

    Ryals, Brenda M.; Dooling, Robert J.

    2000-01-01

    A bird study found that with sufficient time and training after hair cell and hearing loss and hair cell regeneration, the mature avian auditory system can accommodate input from a newly regenerated periphery sufficiently to allow for recognition of previously familiar vocalizations and the learning of new complex acoustic classifications.…

  14. The Role of Somatosensory Information in Speech Perception: Imitation Improves Recognition of Disordered Speech

    ERIC Educational Resources Information Center

    Borrie, Stephanie A.; Schäfer, Martina C. M.

    2015-01-01

    Purpose: Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This…

  15. Modern Languages in the United Kingdom

    ERIC Educational Resources Information Center

    Coleman, James A.

    2011-01-01

    The article supplies an overview of UK modern languages education at school and university level. It attends particularly to trends over recent years, with regard both to numbers and to social elitism, and reflects on perceptions of language learning in the wider culture and the importance of gaining wider recognition of the value of languages…

  16. A Transfer-in-Pieces Consideration of the Perception of Structure in the Transfer of Learning

    ERIC Educational Resources Information Center

    Wagner, Joseph F.

    2010-01-01

    Many approaches to the transfer problem argue that transfer depends on the recognition of the same or similar abstract "structure" in 2 different situations. However, mainstream cognitive perspectives and contrasting Piagetian constructivist accounts differ in their conceptualizations of structure. These differences, not clearly articulated in the…

  17. Implicit Multisensory Associations Influence Voice Recognition

    PubMed Central

    von Kriegstein, Katharina; Giraud, Anne-Lise

    2006-01-01

Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones, and cell phones or brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations become thereafter available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules. PMID:17002519

  18. The Colorado Learning Attitudes about Science Survey (CLASS) for use in Biology.

    PubMed

    Semsar, Katharine; Knight, Jennifer K; Birol, Gülnur; Smith, Michelle K

    2011-01-01

    This paper describes a newly adapted instrument for measuring novice-to-expert-like perceptions about biology: the Colorado Learning Attitudes about Science Survey for Biology (CLASS-Bio). Consisting of 31 Likert-scale statements, CLASS-Bio probes a range of perceptions that vary between experts and novices, including enjoyment of the discipline, propensity to make connections to the real world, recognition of conceptual connections underlying knowledge, and problem-solving strategies. CLASS-Bio has been tested for response validity with both undergraduate students and experts (biology PhDs), allowing student responses to be directly compared with a consensus expert response. Use of CLASS-Bio to date suggests that introductory biology courses have the same challenges as introductory physics and chemistry courses: namely, students shift toward more novice-like perceptions following instruction. However, students in upper-division biology courses do not show the same novice-like shifts. CLASS-Bio can also be paired with other assessments to: 1) examine how student perceptions impact learning and conceptual understanding of biology, and 2) assess and evaluate how pedagogical techniques help students develop both expertise in problem solving and an expert-like appreciation of the nature of biology.

  19. The Colorado Learning Attitudes about Science Survey (CLASS) for Use in Biology

    PubMed Central

    Semsar, Katharine; Knight, Jennifer K.; Birol, Gülnur; Smith, Michelle K.

    2011-01-01

    This paper describes a newly adapted instrument for measuring novice-to-expert-like perceptions about biology: the Colorado Learning Attitudes about Science Survey for Biology (CLASS-Bio). Consisting of 31 Likert-scale statements, CLASS-Bio probes a range of perceptions that vary between experts and novices, including enjoyment of the discipline, propensity to make connections to the real world, recognition of conceptual connections underlying knowledge, and problem-solving strategies. CLASS-Bio has been tested for response validity with both undergraduate students and experts (biology PhDs), allowing student responses to be directly compared with a consensus expert response. Use of CLASS-Bio to date suggests that introductory biology courses have the same challenges as introductory physics and chemistry courses: namely, students shift toward more novice-like perceptions following instruction. However, students in upper-division biology courses do not show the same novice-like shifts. CLASS-Bio can also be paired with other assessments to: 1) examine how student perceptions impact learning and conceptual understanding of biology, and 2) assess and evaluate how pedagogical techniques help students develop both expertise in problem solving and an expert-like appreciation of the nature of biology. PMID:21885823

  20. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  1. From brain synapses to systems for learning and memory: Object recognition, spatial navigation, timed conditioning, and movement control.

    PubMed

    Grossberg, Stephen

    2015-09-24

This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
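The Growing When Required mechanism this abstract relies on can be illustrated with a minimal sketch (not the authors' implementation; thresholds, learning rates, and the habituation rule are illustrative assumptions): a new node is inserted only when the best-matching unit responds weakly to an input and is already well trained; otherwise the winner and its topological neighbors adapt toward the input.

```python
import numpy as np

class GrowingWhenRequired:
    """Minimal GWR sketch: grow a node only when no existing node
    matches the input well enough and the winner is habituated."""

    def __init__(self, dim, activity_threshold=0.85, firing_threshold=0.1,
                 eps_b=0.2, eps_n=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(2, dim))   # start with two random nodes
        self.firing = np.ones(2)             # habituation counter per node
        self.edges = {(0, 1)}                # topological connections
        self.aT, self.fT = activity_threshold, firing_threshold
        self.eps_b, self.eps_n = eps_b, eps_n

    def _neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def train_step(self, x):
        d = np.linalg.norm(self.W - x, axis=1)
        b, s = (int(i) for i in np.argsort(d)[:2])   # best / second-best
        self.edges.add(tuple(sorted((b, s))))
        activity = np.exp(-d[b])
        if activity < self.aT and self.firing[b] < self.fT:
            # poor match and winner already trained: grow a new node
            self.W = np.vstack([self.W, (self.W[b] + x) / 2.0])
            self.firing = np.append(self.firing, 1.0)
            n = len(self.W) - 1
            self.edges |= {tuple(sorted((n, b))), tuple(sorted((n, s)))}
        else:
            # otherwise adapt the winner and its neighbors toward the input
            self.W[b] += self.eps_b * self.firing[b] * (x - self.W[b])
            for n in self._neighbors(b):
                self.W[n] += self.eps_n * self.firing[n] * (x - self.W[n])
        self.firing[b] = max(self.firing[b] - 0.1, 0.0)  # habituate winner

# Toy usage: two well-separated clusters force the network to grow
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(3.0, 0.1, (100, 2))])
net = GrowingWhenRequired(dim=2)
for x in rng.permutation(data):
    net.train_step(x)
```

The growth criterion is what distinguishes GWR from fixed-schedule methods such as Growing Neural Gas: insertion is driven by how well the current input is matched, not by an iteration counter.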

  3. The Initial Development of Object Knowledge by a Learning Robot

    PubMed Central

    Modayil, Joseph; Kuipers, Benjamin

    2008-01-01

    We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188

  4. The Influence of Bilingualism on the Preference for the Mouth Region of Dynamic Faces

    ERIC Educational Resources Information Center

    Ayneto, Alba; Sebastian-Galles, Nuria

    2017-01-01

    Bilingual infants show an extended period of looking at the mouth of talking faces, which provides them with additional articulatory cues that can be used to boost the challenging situation of learning two languages (Pons, Bosch & Lewkowicz, 2015). However, the eye region also provides fundamental cues for emotion perception and recognition,…

  5. Sensori-motor experience leads to changes in visual processing in the developing brain.

    PubMed

    James, Karin Harman

    2010-03-01

    Since Broca's studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that 'learning-by-doing' can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.

  6. Mutual information, perceptual independence, and holistic face perception.

    PubMed

    Fitousi, Daniel

    2013-07-01

The concept of perceptual independence is ubiquitous in psychology. It addresses the question of whether two (or more) dimensions are perceived independently. Several authors have proposed perceptual independence (or its lack thereof) as a viable measure of holistic face perception (Loftus, Oberg, & Dillon, Psychological Review 111:835-863, 2004; Wenger & Ingvalson, Learning, Memory, and Cognition 28:872-892, 2002). According to this notion, the processing of facial features occurs in an interactive manner. Here, I examine this idea from the perspective of two theories of perceptual independence: the multivariate uncertainty analysis (MUA; Garner & Morton, Definitions, models, and experimental paradigms. Psychological Bulletin 72:233-259, 1969), and the general recognition theory (GRT; Ashby & Townsend, Psychological Review 93:154-179, 1986). The goals of the study were to (1) introduce the MUA, (2) examine various possible relations between MUA and GRT using numerical simulations, and (3) apply the MUA to two consensual markers of holistic face perception: recognition of facial features (Farah, Wilson, Drain, & Tanaka, Psychological Review 105:482-498, 1998) and the composite face effect (Young, Hellawell, & Hay, Perception 16:747-759, 1987). The results suggest that facial holism is generated by violations of several types of perceptual independence. They highlight the important theoretical role played by converging operations in the study of holistic face perception.
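Multivariate uncertainty analysis rests on information-theoretic measures of dependence between dimensions. The core quantity can be sketched as the mutual information computed from a joint frequency table (an illustrative sketch of the basic measure, not the full MUA machinery):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information in bits between two dimensions, computed
    from a joint frequency (contingency) table."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of dimension A (rows)
    py = p.sum(axis=0, keepdims=True)   # marginal of dimension B (columns)
    nz = p > 0                          # skip zero cells (0 * log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Independent dimensions: the joint equals the product of the marginals
independent = np.outer([0.5, 0.5], [0.5, 0.5]) * 100
# Perfectly correlated dimensions: knowing one determines the other
correlated = np.array([[50.0, 0.0], [0.0, 50.0]])
```

Under this measure, perceptual independence corresponds to zero mutual information between the dimensions (here, 0 bits for `independent` versus 1 bit for `correlated`); holistic processing shows up as a systematic departure from zero.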

  7. Impact of a learning circle intervention across academic and service contexts on developing a learning culture.

    PubMed

    Walker, Rachel; Henderson, Amanda; Cooke, Marie; Creedy, Debra

    2011-05-01

    Partnerships between university schools of nursing and health services lead to successful learning experiences for students and staff. A purposive sample of academics and students from a university school of nursing and clinicians from three health institutions involved in clinical learning (n=73) actively participated in a learning circles intervention conducted over 5 months in south east Queensland. Learning circle discussions resulted in enhanced communication and shared understanding regarding: (1) staff attitudes towards students, expectations and student assessment; (2) strategies enhancing preparation of students, mechanisms for greater support of and recognition of clinicians; (3) challenges faced by staff in the complex processes of leadership in clinical nursing education; (4) construction of learning, ideas for improving communication, networking and sharing; and (5) questioning routine practices that may not enhance student learning. Pre-post surveys of hospital staff (n=310) revealed significant differences across three sub-scales of 'accomplishment' (t=-3.98, p<.001), 'recognition' (t=-2.22, p<.027) and 'influence' (t=-11.82, p<.001) but not 'affiliation'. Learning circles can positively enhance organisational learning culture. The intervention enabled participants to recognise mutual goals. Further investigation around staff perception of their influence on their workplace is required. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result of the psychological experiments, it can be suggested that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted racial features of texture and shape were used to synthesize faces. As a result, it can be suggested that racial features rely on detailed texture rather than shape. This is fundamental research on race perception, which is essential for the establishment of human-like race recognition systems.
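The PCA feature extraction described in the abstract can be sketched as follows; the random arrays below are hypothetical stand-ins for the flattened face images used in the study, and the function name is an assumption:

```python
import numpy as np

def pca_features(faces, n_components=5):
    """PCA on flattened face images via SVD; rows of `components` are
    the principal axes ("eigenfaces") and the returned scores are the
    extracted features."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: principal axes are the rows of Vt
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]            # (n_components, n_pixels)
    return centered @ components.T, components, mean

# Toy stand-in data: 20 random 8x8 "faces" flattened to 64-vectors
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))
scores, components, mean = pca_features(faces, n_components=5)
```

Reconstructing a face from `mean + scores @ components` with varied scores is the synthesis step the abstract mentions.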

  9. Furthering the understanding of olfaction, prevalence of loss of smell and risk factors: a population-based survey (OLFACAT study)

    PubMed Central

    Mullol, Joaquim; Alobid, Isam; Mariño-Sánchez, Franklin; Quintó, Llorenç; de Haro, Josep; Bernal-Sprekelsen, Manuel; Valero, Antonio; Picado, Cèsar; Marin, Concepció

    2012-01-01

Objectives To investigate olfaction in the general population, the prevalence of olfactory dysfunction and related risk factors. Design Cross-sectional population-based survey, distributing four microencapsulated odorants (rose, banana, musk and gas) and two self-administered questionnaires (odour description; epidemiology/health status). Setting The survey was distributed to the general population through a bilingual (Catalan, Spanish) newspaper in Catalonia (Spain), in December 2003. Participants Newspaper readers of all ages and both genders; 9348 surveys were analysed from the 10 783 returned. Main outcome measures Characteristics of the surveyed population, olfaction by age and gender, smell self-perception and smell impairment risk factors. The terms normosmia, hyposmia and anosmia were used when participants detected, recognised or identified all four, one to three or none of the odours, respectively. Results The survey profile was a 43-year-old woman with medium–high educational level, living in a city. Olfaction was considered normal in 80.6% (detection), 56% (recognition/memory) and 50.7% (identification). Prevalence of smell dysfunction was 19.4% for detection (0.3% anosmia, 19.1% hyposmia), 43.5% for recognition (0.2% anosmia, 43.3% hyposmia) and 48.8% for identification (0.8% anosmia, 48% hyposmia). Olfaction was worse (p<0.0001) in men than in women across all ages. There was a significant age-related decline in smell detection; however, smell recognition and identification increased up to the fourth decade and declined after the sixth decade of life. Risk factors for anosmia were: male gender, a history of smell loss and poor olfactory self-perception for detection; low educational level, poor self-perception and pregnancy for recognition; and older age, poor self-perception and a history of head trauma and smell loss for identification. Smoking and exposure to noxious substances were mild protective factors for smell recognition. Conclusions The sense of smell is better in women than in men, suggesting a learning process during life with deterioration at older ages. Poor self-perception and a history of smell loss, head trauma or pregnancy are potential risk factors for olfactory disorders. PMID:23135536
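The normosmia/hyposmia/anosmia scoring rule stated in the abstract (all four, one to three, or none of the odours, per task) reduces to a small function; a sketch with an assumed function name:

```python
def classify_olfaction(n_correct, n_odours=4):
    """Map a count of correctly handled odours to the categories used
    in the survey, applied separately per task (detection,
    recognition, identification)."""
    if n_correct == n_odours:
        return "normosmia"   # all four odours
    if n_correct == 0:
        return "anosmia"     # none of the odours
    return "hyposmia"        # one to three odours
```

The same participant can therefore fall into different categories on different tasks, which is why the reported prevalence differs so much between detection, recognition and identification.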

  10. Required attention for synthesized speech perception for three levels of linguistic redundancy

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.; Hart, S. G.

    1977-01-01

The study evaluates the attention required for synthesized speech perception at three levels of linguistic redundancy. Twelve commercial airline pilots were individually tested on 16 cockpit warning messages, eight of which consisted of two monosyllabic key words and eight of which consisted of two polysyllabic key words. Three levels of linguistic redundancy were identified: monosyllabic words, polysyllabic words, and sentences. The experiment contained a message familiarization phase and a message recognition phase. It was found that: (1) when the messages are part of a previously learned and recently heard set, and the subject is familiar with the phrasing, the attention needed to recognize a message is not a function of the level of linguistic redundancy; and (2) there is a quantitative and qualitative difference between recognition and comprehension processes; only in the case of active comprehension does additional redundancy reduce attention requirements.

  11. Physics career intentions: The effect of physics identity, math identity, and gender

    NASA Astrophysics Data System (ADS)

    Lock, Robynne M.; Hazari, Zahra; Potvin, Geoff

    2013-01-01

    Although nearly half of high school physics students are female, only 21% of physics bachelor's degrees are earned by women. Using data from a national survey of college students in introductory English courses (on science-related experiences, particularly in high school), we examine the influence of students' physics and math identities on their choice to pursue a physics career. Males have higher math and physics identities than females in all three dimensions of our identity framework. These dimensions include: performance/competence (perceptions of ability to perform/understand), recognition (perception of recognition by others), and interest (desire to learn more). A regression model predicting students' intentions to pursue physics careers shows, as expected, that males are significantly more likely to choose physics than females. Surprisingly, however, when physics and math identity are included in the model, females are shown to be equally likely to choose physics careers as compared to males.

  12. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  13. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Hopfield's Model of Patterns Recognition and Laws of Artistic Perception

    NASA Astrophysics Data System (ADS)

    Yevin, Igor; Koblyakov, Alexander

    The model of pattern recognition, or attractor network model of associative memory, proposed by J. Hopfield in 1982, is the best-known model in theoretical neuroscience. This paper aims to show that such well-known laws of art perception as the Wundt curve, the perception of visual ambiguity in art, and also the perception of musical tonalities are special cases of Hopfield's model of pattern recognition.
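As an illustration of the attractor dynamics this record refers to, here is a minimal sketch of a Hopfield network in Python with NumPy: bipolar patterns are stored with the Hebbian outer-product rule and recalled by iterating the sign update until the state settles into an attractor. The pattern values, network size, and update schedule are illustrative choices, not taken from the paper.

```python
import numpy as np

def train(patterns):
    """Store bipolar (+1/-1) patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, max_sweeps=10):
    """Iterate asynchronous sign updates until the state stops changing."""
    s = state.copy()
    for _ in range(max_sweeps):
        prev = s.copy()
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):  # converged to an attractor
            break
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with one flipped bit
print(recall(W, noisy))  # settles back into the first stored pattern
```

A state within the basin of attraction of a stored pattern converges to it; this completion of a degraded input is the "pattern recognition" behavior that the paper maps onto laws of artistic perception.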

  15. Songbirds use spectral shape, not pitch, for sound pattern recognition

    PubMed Central

    Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.

    2016-01-01

    Humans easily recognize “transposed” musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition. PMID:26811447

  16. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. 
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. [The role of temporal fine structure in tone recognition and music perception].

    PubMed

    Zhou, Q; Gu, X; Liu, B

    2017-11-07

    The sound signal can be decomposed into temporal envelope and temporal fine structure information. The temporal envelope information is crucial for speech perception in quiet environment, and the temporal fine structure information plays an important role in speech perception in noise, Mandarin tone recognition and music perception, especially the pitch and melody perception.

  18. Fathers Online: Learning About Fatherhood Through the Internet

    PubMed Central

    StGeorge, Jennifer M.; Fletcher, Richard J.

    2011-01-01

    In the transition to fatherhood, men face numerous challenges. Opportunities to learn new practices and gain support are limited, although the provisions of father-specific spaces such as fathers’ antenatal classes or “responsible fathering” programs are important advances. This article explores how men use the social space of a father-specific Internet chat room to learn about fathering. Messages to an Australian-hosted, father-specific chat room (for fathers of infants or young children) were examined, and three overlapping themes illustrated men’s perceptions of their transition to fatherhood. The themes concerned recognition of and response to a lack of social space, services, and support for new fathers. The implications for fathers’ perinatal education are discussed. PMID:22654464

  19. What happens to the motor theory of perception when the motor system is damaged?

    PubMed

    Stasenko, Alena; Garcea, Frank E; Mahon, Bradford Z

    2013-09-01

    Motor theories of perception posit that motor information is necessary for successful recognition of actions. Perhaps the most well known of this class of proposals is the motor theory of speech perception, which argues that speech recognition is fundamentally a process of identifying the articulatory gestures (i.e. motor representations) that were used to produce the speech signal. Here we review neuropsychological evidence from patients with damage to the motor system, in the context of motor theories of perception applied to both manual actions and speech. Motor theories of perception predict that patients with motor impairments will have impairments for action recognition. Contrary to that prediction, the available neuropsychological evidence indicates that recognition can be spared despite profound impairments to production. These data falsify strong forms of the motor theory of perception, and frame new questions about the dynamical interactions that govern how information is exchanged between input and output systems.

  1. Acquired self-control of insula cortex modulates emotion recognition and brain network connectivity in schizophrenia.

    PubMed

    Ruiz, Sergio; Lee, Sangkyun; Soekadar, Surjo R; Caria, Andrea; Veit, Ralf; Kircher, Tilo; Birbaumer, Niels; Sitaram, Ranganatha

    2013-01-01

    Real-time functional magnetic resonance imaging (rtfMRI) is a novel technique that has allowed subjects to achieve self-regulation of circumscribed brain regions. Despite its anticipated therapeutic benefits, there is no report on successful application of this technique in psychiatric populations. The objectives of the present study were to train schizophrenia patients to achieve volitional control of the bilateral anterior insula cortex on multiple days, and to explore the effect of learned self-regulation on face emotion recognition (an extensively studied deficit in schizophrenia) and on brain network connectivity. Nine patients with schizophrenia were trained to regulate the hemodynamic response in the bilateral anterior insula with contingent rtfMRI neurofeedback over a 2-week training period. At the end of the training stage, patients performed a face emotion recognition task to explore behavioral effects of learned self-regulation. A learning effect in self-regulation was found for the bilateral anterior insula, which persisted through the training. Following successful self-regulation, patients recognized disgust faces more accurately and happy faces less accurately. Improvements in disgust recognition were correlated with levels of self-activation of the right insula. RtfMRI training led to an increase in the number of incoming and outgoing effective connections of the anterior insula. This study shows for the first time that patients with schizophrenia can learn volitional brain regulation by rtfMRI feedback training, leading to changes in the perception of emotions and modulation of brain network connectivity. These findings open the door for further studies of rtfMRI in severely ill psychiatric populations and possible therapeutic applications. Copyright © 2011 Wiley Periodicals, Inc.

  2. On the Relationship between Memory and Perception: Sequential Dependencies in Recognition Memory Testing

    ERIC Educational Resources Information Center

    Malmberg, Kenneth J.; Annis, Jeffrey

    2012-01-01

    Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…

  3. Bayesian Action–Perception Computational Model: Interaction of Production and Recognition of Cursive Letters

    PubMed Central

    Gilet, Estelle; Diard, Julien; Bessière, Pierre

    2011-01-01

    In this paper, we study the collaboration of perception and action representations involved in cursive letter recognition and production. We propose a mathematical formulation for the whole perception–action loop, based on probabilistic modeling and Bayesian inference, which we call the Bayesian Action–Perception (BAP) model. Being a model of both perception and action processes, the purpose of this model is to study the interaction of these processes. More precisely, the model includes a feedback loop from motor production, which implements an internal simulation of movement. Motor knowledge can therefore be involved during perception tasks. In this paper, we formally define the BAP model and show how it solves the following six varied cognitive tasks using Bayesian inference: i) letter recognition (purely sensory), ii) writer recognition, iii) letter production (with different effectors), iv) copying of trajectories, v) copying of letters, and vi) letter recognition (with internal simulation of movements). We present computer simulations of each of these cognitive tasks, and discuss experimental predictions and theoretical developments. PMID:21674043
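The Bayesian inference at the core of such a model can be illustrated with a toy posterior computation for the letter-recognition task: P(letter | features) ∝ P(features | letter) · P(letter). The letters, binary trajectory features, and all probability values below are invented for illustration; they are not parameters of the actual BAP model.

```python
import numpy as np

letters = ["a", "d", "o"]
prior = np.array([0.4, 0.2, 0.4])  # P(letter), illustrative values
# P(feature present | letter) for two binary trajectory features:
# feature 0 = "has closed loop", feature 1 = "has ascender stroke"
likelihood = np.array([[0.9, 0.10],   # "a": loop likely, ascender unlikely
                       [0.9, 0.90],   # "d": loop and ascender both likely
                       [0.9, 0.05]])  # "o": loop likely, ascender very unlikely

def posterior(observed):
    """observed[i] = 1 if feature i was seen in the trajectory, else 0."""
    like = np.prod(np.where(observed, likelihood, 1 - likelihood), axis=1)
    post = prior * like          # Bayes rule, unnormalized
    return post / post.sum()     # normalize to a distribution over letters

p = posterior(np.array([1, 1]))  # observed a loop and an ascender
print(letters[int(np.argmax(p))])  # "d" is the most probable letter
```

The same machinery runs "in reverse" for production tasks: fixing the letter variable and inferring a distribution over trajectories, which is how a single joint model can serve both perception and action queries.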

  4. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. Task Versus Component Consistency in the Development of Automatic Processes: Consistent Attending Versus Consistent Responding.

    DTIC Science & Technology

    1982-03-01

    are two qualitatively different forms of human information processing (James, 1890; Hasher & Zacks, 1979; LaBerge, 1973, 1975; Logan, 1978, 1979)... Kristofferson, M. W. When item recognition and visual search functions are similar. Perception & Psychophysics, 1972, 12, 379-384. LaBerge, D. Attention and the measurement of perceptual learning. Memory and Cognition, 1973, 1, 263-276. LaBerge, D. Acquisition of automatic processing in perceptual and...

  6. Use of rhythm in acquisition of a computer-generated tracking task.

    PubMed

    Fulop, A C; Kirby, R H; Coates, G D

    1992-08-01

    This research assessed whether rhythm aids acquisition of motor skills by providing cues for the timing of those skills. Rhythms were presented to participants visually or visually with auditory cues. It was hypothesized that the auditory cues would facilitate recognition and learning of the rhythms. The three timing principles of rhythms were also explored. It was hypothesized that rhythms that satisfied all three timing principles would be more beneficial in learning a skill than rhythms that did not satisfy the principles. Three groups learned three different rhythms by practicing a tracking task. After training, participants attempted to reproduce the tracks from memory. Results suggest that rhythms do help in learning motor skills but different sets of timing principles explain perception of rhythm in different modalities.

  7. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
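To illustrate how an HMM assigns a likelihood to a fixation sequence, here is a minimal sketch of the forward algorithm over a toy two-state gaze model (a "holistic" center-focused state and an "analytic" eye-focused state). All states, observation codes, and probabilities are invented for illustration; the study's HMMs were learned from participants' eye-movement data.

```python
import numpy as np

# Hidden states: 0 = center-focused, 1 = eye-focused.
# Observations: 0 = face center, 1 = left eye, 2 = right eye.
A = np.array([[0.8, 0.2],       # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.8, 0.1, 0.1],  # emission probabilities per state
              [0.2, 0.4, 0.4]])
pi = np.array([0.6, 0.4])       # initial state distribution

def sequence_likelihood(obs):
    """P(obs | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()

holistic = [0, 0, 0, 0]  # fixations stay on the face center
analytic = [0, 1, 2, 1]  # fixations alternate between the eyes
print(sequence_likelihood(holistic) > sequence_likelihood(analytic))  # True
```

Classifying a participant's pattern then amounts to comparing such likelihoods across candidate models (e.g., a holistic model versus an analytic model) and assigning the sequence to the model that scores it higher.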

  8. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  9. Effects of Instructor Attractiveness on Learning.

    PubMed

    Westfall, Richard; Millar, Murray; Walsh, Mandy

    2016-01-01

    Although a considerable body of research has examined the impact of student attractiveness on instructors, little attention has been given to the influence of instructor attractiveness on students. This study tested the hypothesis that persons would perform significantly better on a learning task when they perceived their instructor to be high in physical attractiveness. To test the hypothesis, participants listened to an audio lecture while viewing a photograph of the instructor. The photograph depicted either a physically attractive instructor or a less attractive instructor. Following the lecture, participants completed a forced-choice recognition task covering material from the lecture. Consistent with the predictions, attractive instructors were associated with more learning. Finally, we replicated previous findings demonstrating the role attractiveness plays in person perception.

  10. Erring and learning in clinical practice.

    PubMed Central

    Hurwitz, Brian

    2002-01-01

    This paper discusses error types, their possible consequences, and the doctors who make them. There is no single, all-encompassing typology of medical errors. They are frequently multifactorial in origin and arise from the mental processes of individuals: from defects in perception, thinking, reasoning, planning and interpretation, and from failures of team-working, omissions and poorly executed actions. They also arise from inadequately designed and operated healthcare systems or procedures. The paper considers error-truth relatedness, the approach of UK courts to medical errors, the learning opportunities which flow from error recognition, and the need for personal and professional self-awareness of clinical fallibilities. PMID:12389767

  11. Global facial beauty: approaching a unified aesthetic ideal.

    PubMed

    Sands, Noah B; Adamson, Peter A

    2014-04-01

    Recognition of facial beauty is both inborn and learned through social discourses and exposures. Demographic shifts across the globe, in addition to cross-cultural interactions that typify 21st century globalization in virtually all industries, comprise major active evolutionary forces that reshape our individual notions of facial beauty. This article highlights the changing perceptions of beauty, while defining and distinguishing natural beauty and artificial beauty.

  12. A "Situational" and "Coorientational" Measure of Specialized Magazine Editors' Perceptions of Readers.

    ERIC Educational Resources Information Center

    Jeffers, Dennis W.

    A study was undertaken of specialized magazine editors' perceptions of audience characteristics as well as the perceived role of their publications. Specifically, the study examines the relationship between the editors' perceptions of reader problem recognition, level of involvement, constraint recognition, and possession of reference criteria and…

  13. The development of newborn object recognition in fast and slow visual worlds

    PubMed Central

    Wood, Justin N.; Wood, Samantha M. W.

    2016-01-01

    Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925

  14. The role of recognition and interest in physics identity development

    NASA Astrophysics Data System (ADS)

    Lock, Robynne

    2016-03-01

    While the number of students earning bachelor's degrees in physics has increased in recent years, this number has only recently surpassed the peak value of the 1960s. Additionally, the percentage of women earning bachelor's degrees in physics has stagnated for the past 10 years and may even be declining. We use a physics identity framework consisting of three dimensions to understand how students make their initial career decisions at the end of high school and the beginning of college. The three dimensions consist of recognition (perception that teachers, parents, and peers see the student as a "physics person"), interest (desire to learn more about physics), and performance/competence (perception of abilities to complete physics related tasks and to understand physics). Using data from the Sustainability and Gender in Engineering survey administered to a nationally representative sample of college students, we built a regression model to determine which identity dimensions have the largest effect on physics career choice and a structural equation model to understand how the identity dimensions are related. Additionally, we used regression models to identify teaching strategies that predict each identity dimension.

  15. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing

    PubMed Central

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2011-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers’ exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. PMID:22192636

  16. Science Teachers' Perceptions of the Relationship Between Game Play and Inquiry Learning

    NASA Astrophysics Data System (ADS)

    Mezei, Jessica M.

    The implementation of inquiry learning in American science classrooms remains a challenge. Teachers' perceptions of inquiry learning are predicated on their past educational experiences, which means outdated methods of learning may influence teachers' instructional approaches. In order to enhance their understanding and ultimately their implementation of inquiry learning, teachers need new and more relevant models. This study takes a preliminary step in exploring the potential of game play as a valuable experience for science teachers. It has been proposed that game play and inquiry experiences can embody constructivist processes of learning; however, there has been little work done with science teachers to systematically explore the relationship between the two. Game play may be an effective new model for teacher education, and it is important to understand if and how teachers relate game-playing experience and knowledge to inquiry. This study examined science teachers' game-playing experiences and their perceptions of inquiry experiences, and evaluated teachers' recognition of learning in both contexts. Data were collected through an online survey (N=246) and a series of follow-up interviews (N=29). Research questions guiding the study were: (1) What is the nature of the relationship between science teachers' game experience and their perceptions of inquiry? (2) How do teachers describe learning in and from game playing as compared with inquiry science learning? and (3) What is the range of similarities and differences teachers articulate between game play and inquiry experiences? Results showed weak quantitative links between science teachers' game experiences and their perceptions of inquiry, but identified promising game variables, such as belief in games as learning tools, game experiences, and playing a diverse set of games, for future study. The qualitative data suggest that teachers made broad linkages in terms of parallels of both teaching and learning. Teachers mostly articulated learning connections in terms of the active or participatory nature of the experiences. Additionally, a majority of teachers discussed inquiry learning in concert with inquiry teaching, which led to a wider range of comparisons based on the teachers' interpretation of inquiry as a pedagogical approach rather than a focus solely on inquiry learning. This study has implications for both research and practice. Results demonstrate that teachers are interested in game play as it relates to learning, and the linkages teachers made between the domains suggest that game play may yet prove to be a fruitful analogical device that could be leveraged for teacher development. However, further study is needed to test these claims; ultimately, research that further aligns the benefits of game-play experiences with teacher practice is encouraged in order to build on the propositions and findings of this thesis.

  17. Adaptation to nonlinear frequency compression in normal-hearing adults: a comparison of training approaches.

    PubMed

    Dickinson, Ann-Marie; Baker, Richard; Siciliano, Catherine; Munro, Kevin J

    2014-10-01

    To identify which training approach, if any, is most effective for improving perception of frequency-compressed speech. A between-subjects design using repeated measures. Forty young adults with normal hearing were randomly allocated to one of four groups: a training group (sentence or consonant) or a control group (passive exposure or test-only). Test and training material differed in terms of material and speaker. On average, sentence training and passive exposure led to significantly improved sentence recognition (11.0% and 11.7%, respectively) compared with the consonant training group (2.5%) and test-only group (0.4%), whilst consonant training led to significantly improved consonant recognition (8.8%) compared with the sentence training group (1.9%), passive exposure group (2.8%), and test-only group (0.8%). Sentence training led to improved sentence recognition, whilst consonant training led to improved consonant recognition. This suggests learning transferred between speakers and material but not stimuli. Passive exposure to sentence material led to an improvement in sentence recognition that was equivalent to gains from active training. This suggests that it may be possible to adapt passively to frequency-compressed speech.

  18. Speech Perception, Word Recognition and the Structure of the Lexicon. Research on Speech Perception Progress Report No. 10.

    ERIC Educational Resources Information Center

    Pisoni, David B.; And Others

    The results of three projects concerned with auditory word recognition and the structure of the lexicon are reported in this paper. The first project described was designed to test experimentally several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. The second project description provides the…

  19. Development of visuo-haptic transfer for object recognition in typical preschool and school-aged children.

    PubMed

    Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca

    2018-07-01

    Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences as well as cognitive abilities to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, contribute significantly to adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies regarding object recognition in impaired populations.

  20. Speech perception and reading: two parallel modes of understanding language and implications for acquiring literacy naturally.

    PubMed

    Massaro, Dominic W

    2012-01-01

    I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally.

  1. Vibrotactile feedback for conveying object shape information as perceived by artificial sensing of robotic arm.

    PubMed

    Khasnobish, Anwesha; Pal, Monalisa; Sardar, Dwaipayan; Tibarewala, D N; Konar, Amit

    2016-08-01

    This work is a preliminary study towards developing an alternative communication channel for conveying shape information to aid in recognition of items when tactile perception is hindered. Tactile data, acquired during object exploration by a sensor-fitted robot arm, are processed to recognize four basic geometric shapes. Patterns representing each shape, classified from tactile data, are generated using micro-controller-driven vibration motors, which vibrotactually stimulate users to convey the particular shape information. These motors are attached to the subject's arm, and their psychological (verbal) responses are recorded to assess the competence of the system in conveying shape information to the user in the form of vibrotactile stimulations. Object shapes are classified from tactile data with an average accuracy of 95.21%. Across three successive sessions of shape recognition from vibrotactile patterns, subjects' recognition of the stimuli increased from 75% to 95%. This observation substantiates users' learning of the vibrotactile stimulation over the sessions, which in turn increases the system's efficacy. The tactile sensing module and the vibrotactile pattern-generating module are integrated to complete the system, whose operation is analysed in real time. Thus, the work demonstrates a successful implementation of the complete schema of an artificial tactile sensing system for object-shape recognition through vibrotactile stimulations.

  2. Development and validation of the University of Washington Clinical Assessment of Music Perception test.

    PubMed

    Kang, Robert; Nimmons, Grace Liu; Drennan, Ward; Longnion, Jeff; Ruffin, Chad; Nie, Kaibao; Won, Jong Ho; Worman, Tina; Yueh, Bevan; Rubinstein, Jay

    2009-08-01

    Assessment of cochlear implant outcomes centers around speech discrimination. Despite dramatic improvements in speech perception, music perception remains a challenge for most cochlear implant users. No standardized test exists to quantify music perception in a clinically practical manner. This study presents the University of Washington Clinical Assessment of Music Perception (CAMP) test as a reliable and valid music perception test for English-speaking, adult cochlear implant users. Forty-two cochlear implant subjects were recruited from the University of Washington Medical Center cochlear implant program and referred by two implant manufacturers. Ten normal-hearing volunteers were drawn from the University of Washington Medical Center and associated campuses. A computer-driven, self-administered test was developed to examine three specific aspects of music perception: pitch direction discrimination, melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure to determine just-noticeable differences for complex tone pitch direction discrimination within the range of 1 to 12 semitones. The melody and timbre subtests assessed recognition of 12 commonly known melodies played with complex tones in an isochronous manner and eight musical instruments playing an identical five-note sequence, respectively. Testing was repeated for cochlear implant subjects to evaluate test-retest reliability. Normal-hearing volunteers were also tested to demonstrate differences in performance between the two populations. For cochlear implant subjects, pitch direction discrimination just-noticeable differences ranged from 1 to 8.0 semitones (mean = 3.0, SD = 2.3). Melody and timbre recognition ranged from 0 to 94.4% correct (mean = 25.1, SD = 22.2) and 20.8 to 87.5% (mean = 45.3, SD = 16.2), respectively. Each subtest significantly correlated at least moderately with both Consonant-Nucleus-Consonant (CNC) word recognition scores and spondee recognition thresholds in steady-state noise and two-talker babble. Intraclass coefficients demonstrating test-retest correlations for pitch, melody, and timbre were 0.85, 0.92, and 0.69, respectively. Normal-hearing volunteers had a mean pitch direction discrimination threshold of 1.0 semitone, the smallest interval tested, and mean melody and timbre recognition scores of 87.5 and 94.2%, respectively. The CAMP test discriminates a wide range of music perceptual ability in cochlear implant users. Moderate correlations were seen between music test results and both Consonant-Nucleus-Consonant word recognition scores and spondee recognition thresholds in background noise. Test-retest reliability was moderate to strong. The CAMP test provides a reliable and valid metric for a clinically practical, standardized evaluation of music perception in adult cochlear implant users.
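
    The adaptive procedure used by the pitch subtest can be illustrated with a standard transformed staircase. The sketch below is a generic 2-down/1-up staircase (converging near the 70.7%-correct point) run against a simulated listener; it is not the CAMP implementation, and the `respond` interface, step size, reversal count, and the listener's threshold are all assumptions for illustration.

```python
import random

def staircase_jnd(respond, start=12.0, floor=1.0, ceiling=12.0,
                  step=1.0, max_reversals=8):
    """Generic 2-down/1-up adaptive staircase for a pitch-direction task.

    `respond(interval)` returns True when the listener judges pitch
    direction correctly at `interval` semitones.
    """
    interval, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < max_reversals:
        if respond(interval):
            streak += 1
            if streak == 2:                      # two correct: make it harder
                streak = 0
                if direction == +1:
                    reversals.append(interval)   # down after up = reversal
                direction = -1
                interval = max(floor, interval - step)
        else:                                    # one error: make it easier
            streak = 0
            if direction == -1:
                reversals.append(interval)       # up after down = reversal
            direction = +1
            interval = min(ceiling, interval + step)
    return sum(reversals[2:]) / len(reversals[2:])  # mean of late reversals

# Simulated listener with a notional threshold of ~3 semitones
random.seed(1)
def listener(interval, threshold=3.0):
    p_correct = 0.5 + 0.5 * min(1.0, interval / (2 * threshold))
    return random.random() < p_correct

jnd = staircase_jnd(listener)
```

    The mean of the late reversal points serves as the just-noticeable-difference estimate; real clinical procedures differ in step rules and stopping criteria.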

  3. Contribution of hearing aids to music perception by cochlear implant users.

    PubMed

    Peterson, Nathaniel; Bergeson, Tonya R

    2015-09-01

    Modern cochlear implant (CI) encoding strategies represent the temporal envelope of sounds well but provide limited spectral information. This deficit in spectral information has been implicated as a contributing factor in difficulty with speech perception in noise, discrimination between talkers, and melody recognition. One way to supplement spectral information for CI users is by fitting a hearing aid (HA) to the non-implanted ear. In this study, 14 postlingually deaf adults (half with a unilateral CI and the other half with a CI and an HA (CI + HA)) were tested on measures of music perception and familiar melody recognition. CI + HA listeners performed significantly better than CI-only listeners on all pitch-based music perception tasks. The CI + HA group did not perform significantly better than the CI-only group in the two tasks that relied on duration cues. Recognition of familiar melodies was significantly enhanced for the group wearing an HA in addition to their CI. This advantage in melody recognition increased when melodic sequences were presented with the addition of harmony. These results show that, for CI recipients with aidable hearing in the non-implanted ear, using an HA in addition to their implant improves perception of musical pitch and recognition of real-world melodies.

  4. Developmental prosopagnosia and super-recognition: no special role for surface reflectance processing.

    PubMed

    Russell, Richard; Chatterjee, Garga; Nakayama, Ken

    2012-01-01

    Face recognition by normal subjects depends in roughly equal proportions on shape and surface reflectance cues, while object recognition depends predominantly on shape cues. It is possible that developmental prosopagnosics are deficient not in their ability to recognize faces per se, but rather in their ability to use reflectance cues. Similarly, super-recognizers' exceptional ability with face recognition may be a result of superior surface reflectance perception and memory. We tested this possibility by administering tests of face perception and face recognition in which only shape or reflectance cues are available to developmental prosopagnosics, super-recognizers, and control subjects. Face recognition ability and the relative use of shape and pigmentation were unrelated in all the tests. Subjects who were better at using shape or reflectance cues were also better at using the other type of cue. These results do not support the proposal that variation in surface reflectance perception ability is the underlying cause of variation in face recognition ability. Instead, these findings support the idea that face recognition ability is related to neural circuits using representations that integrate shape and pigmentation information. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Bayesian Action-Perception loop modeling: Application to trajectory generation and recognition using internal motor simulation

    NASA Astrophysics Data System (ADS)

    Gilet, Estelle; Diard, Julien; Palluel-Germain, Richard; Bessière, Pierre

    2011-03-01

    This paper is about modeling perception-action loops and, more precisely, the study of the influence of motor knowledge during perception tasks. We use the Bayesian Action-Perception (BAP) model, which deals with the sensorimotor loop involved in reading and writing cursive isolated letters and includes an internal motor-simulation loop. Using this probabilistic model, we simulate letter recognition both with and without internal motor simulation. Comparison of their performance yields an experimental prediction, which we set forth.

  6. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  7. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom.

    PubMed

    Anderson, Karen L; Goldstein, Howard

    2004-04-01

    Children typically learn in classroom environments that have background noise and reverberation that interfere with accurate speech perception. Amplification technology can enhance the speech perception of students who are hard of hearing. This study used a single-subject alternating treatments design to compare the speech recognition abilities of children who are hard of hearing when they were using hearing aids with each of three frequency modulated (FM) or infrared devices. Eight 9-12-year-olds with mild to severe hearing loss repeated Hearing in Noise Test (HINT) sentence lists under controlled conditions in a typical kindergarten classroom with a background noise level of +10 dB signal-to-noise (S/N) ratio and 1.1 s reverberation time. Participants listened to HINT lists using hearing aids alone and hearing aids in combination with three types of S/N-enhancing devices that are currently used in mainstream classrooms: (a) FM systems linked to personal hearing aids, (b) infrared sound field systems with speakers placed throughout the classroom, and (c) desktop personal sound field FM systems. The infrared ceiling sound field system did not provide benefit beyond that provided by hearing aids alone. Desktop and personal FM systems in combination with personal hearing aids provided substantial improvements in speech recognition. This information can assist in making S/N-enhancing device decisions for students using hearing aids. In a reverberant and noisy classroom setting, classroom sound field devices are not beneficial to speech perception for students with hearing aids, whereas either personal FM or desktop sound field systems provide listening benefits.

  9. Music Training Can Improve Music and Speech Perception in Pediatric Mandarin-Speaking Cochlear Implant Users.

    PubMed

    Cheng, Xiaoting; Liu, Yangwenyi; Shu, Yilai; Tao, Duo-Duo; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2018-01-01

    Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training began; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively (p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users' music and speech perception.

  10. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    PubMed Central

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962
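
    The reservoir idea in this record can be sketched in a simplified, non-spiking form: a fixed random recurrent network projects an input sequence into a high-dimensional state, and only a lightweight readout is trained, which is why a handful of labeled examples per class can suffice. The toy data generator, network sizes, and nearest-centroid readout below are illustrative assumptions, not the paper's spiking model or microsaccade-inspired encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 16, 200
W_in = rng.normal(0, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius < 1

def reservoir_state(seq):
    """Run a (T, N_IN) sequence through the fixed reservoir; return final state."""
    x = np.zeros(N_RES)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

def make_class_seq(cls, T=20):
    """Hypothetical data: each class is a noisy multichannel sinusoid."""
    t = np.arange(T)
    base = np.sin(np.outer(t, np.linspace(0.1, 1.0, N_IN)) * (cls + 1))
    return base + rng.normal(0, 0.1, base.shape)

# "Few-shot" training: 8 examples per class, nearest-centroid readout
classes = [0, 1, 2]
centroids = {c: np.mean([reservoir_state(make_class_seq(c)) for _ in range(8)],
                        axis=0)
             for c in classes}

def classify(seq):
    x = reservoir_state(seq)
    return min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))

correct = sum(classify(make_class_seq(c)) == c
              for c in classes for _ in range(10))
print(correct, "/ 30 correct")
```

    Only the centroids are learned here; the recurrent weights stay fixed, mirroring the reservoir-computing design choice that makes learning from few examples tractable.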

  11. Auditory perception vs. recognition: representation of complex communication sounds in the mouse auditory cortical fields.

    PubMed

    Geissler, Diana B; Ehret, Günter

    2004-02-01

    Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.

  12. Multivariate predictors of music perception and appraisal by adult cochlear implant users.

    PubMed

    Gfeller, Kate; Oleson, Jacob; Knutson, John F; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2008-02-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.

  13. Constraints on the Transfer of Perceptual Learning in Accented Speech

    PubMed Central

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  14. Audiovisual semantic congruency during encoding enhances memory performance.

    PubMed

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  15. An adaptive deep Q-learning strategy for handwritten digit recognition.

    PubMed

    Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min

    2018-02-22

    Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning with the decision-making capability of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of the original images using an adaptive deep auto-encoder (ADAE), and the extracted features are treated as the current states of the Q-learning algorithm. Second, Q-ADBN receives a Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
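
    The two-stage idea in this abstract (unsupervised feature extraction, then Q-learning over the extracted features, with the classification decision as the action and correctness as the reward) can be sketched in miniature. This is an illustrative toy, not the paper's Q-ADBN: a PCA projection stands in for the adaptive deep auto-encoder, and a tiny synthetic two-class dataset stands in for MNIST.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: two 4-pixel "digit" prototypes plus noise
# (invented data for illustration only).
protos = np.array([[1., 1., 0., 0.],    # "digit 0"
                   [0., 0., 1., 1.]])   # "digit 1"
labels = rng.integers(0, 2, 200)
X = protos[labels] + 0.1 * rng.standard_normal((200, 4))

# Stage 1 (feature extraction): a PCA projection stands in for the
# adaptive deep auto-encoder; the projected code is the Q-learning "state".
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
H = Xc @ Vt[:2].T                       # 2-D feature code per image

# Stage 2 (decision making): Q-learning where each action is a digit label
# and the reward is +1 for a correct classification, -1 otherwise.
Q = np.zeros((2, 2))                    # linear Q-function: q(state) = h @ Q
lr = 0.1
for _ in range(50):                     # epochs over the training set
    for h, label in zip(H, labels):
        q = h @ Q
        a = int(np.argmax(q))           # greedy action = predicted digit
        r = 1.0 if a == label else -1.0
        Q[:, a] += lr * (r - q[a]) * h  # one-step (bandit-style) Q update

accuracy = np.mean((H @ Q).argmax(axis=1) == labels)
```

    Because each classification is rewarded immediately, the update reduces to a one-step (contextual-bandit) form of Q-learning; the paper's deeper network and MNIST-scale training follow the same extract-then-decide pattern.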

  16. Theory of Mind, Emotion Recognition and Social Perception in Individuals at Clinical High Risk for Psychosis: findings from the NAPLS-2 cohort.

    PubMed

    Barbato, Mariapaola; Liu, Lu; Cadenhead, Kristin S; Cannon, Tyrone D; Cornblatt, Barbara A; McGlashan, Thomas H; Perkins, Diana O; Seidman, Larry J; Tsuang, Ming T; Walker, Elaine F; Woods, Scott W; Bearden, Carrie E; Mathalon, Daniel H; Heinssen, Robert; Addington, Jean

    2015-09-01

    Social cognition, the mental operations that underlie social interactions, is a major construct to investigate in schizophrenia. Impairments in social cognition are present before the onset of psychosis, and even in unaffected first-degree relatives, suggesting that social cognition may be a trait marker of the illness. In a large cohort of individuals at clinical high risk for psychosis (CHR) and healthy controls, three domains of social cognition (theory of mind, facial emotion recognition, and social perception) were assessed to clarify which domains are impaired in this population. Six hundred and seventy-five CHR individuals and 264 controls, who were part of the multi-site North American Prodrome Longitudinal Study, completed The Awareness of Social Inference Test, the Penn Emotion Recognition task, the Penn Emotion Differentiation task, and the Relationship Across Domains, measures of theory of mind, facial emotion recognition, and social perception, respectively. Social cognition was not related to positive and negative symptom severity, but was associated with age and IQ. CHR individuals demonstrated poorer performance on all measures of social cognition. However, after controlling for age and IQ, the group differences remained significant for measures of theory of mind and social perception, but not for facial emotion recognition. Theory of mind and social perception are impaired in individuals at CHR for psychosis. Age and IQ seem to play an important role in the emergence of deficits in facial affect recognition. Future studies should examine the stability of social cognition deficits over time and their role, if any, in the development of psychosis.

  17. Workplace learning and career progression: qualitative perspectives of UK dietitians.

    PubMed

    Boocock, R C; O'Rourke, R K

    2018-06-10

    Post-graduate education and continuous professional development (CPD) within dietetics lack clearly defined pathways. The current literature primarily focuses on new graduates' perceptions of workplace learning (WPL). The present study raises issues of how CPD is sustained throughout a National Health Service (NHS) career, how informal learning might be made more visible and whether the workplace withholds learning opportunities. Qualified dietitians participated in focus groups (n = 32) and a nominal group technique (n = 24). Data from audio recordings were transcribed and triangulated. Thematic analysis took an interpretative approach. A single approach to WPL did not meet the learning needs of all dietitians and, likely, other allied health professionals (AHPs). Informal, implicit learning affordances often went unrecognised. A greater emphasis on teaching, picking up on the strong preference for discussion with others voiced in the present study, may improve recognition of all WPL opportunities. Better scaffolding, or guided support, of entry-level dietitians may ease the transition from study to workplace and challenge any perception of 'clipped wings'. Where development and career progression prove difficult for experienced dietitians, mentoring or stepping outside the NHS may revitalise practice by providing new communities of practice. WPL cannot be understood as a unitary concept. Dietitians engage with WPL differently across their careers. Future visions of WPL, especially explicit post-graduate career and education frameworks, must accommodate these differences to retain the highest calibre dietitians. The implications of a period of learning 'maintenance' rather than CPD among experienced dietitians offer a topic for further research, particularly as the workforce ages. © 2018 The British Dietetic Association Ltd.

  18. Pattern Perception and Pictures for the Blind

    ERIC Educational Resources Information Center

    Heller, Morton A.; McCarthy, Melissa; Clark, Ashley

    2005-01-01

    This article reviews recent research on perception of tangible pictures in sighted and blind people. Haptic picture naming accuracy is dependent upon familiarity and access to semantic memory, just as in visual recognition. Performance is high when haptic picture recognition tasks do not depend upon semantic memory. Viewpoint matters for the ease…

  19. Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel

    PubMed Central

    Kleinschmidt, Dave F.; Jaeger, T. Florian

    2016-01-01

    Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension. PMID:25844873
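
    The distributional (belief-updating) learning idea can be made concrete with the standard conjugate Normal update. The sketch below is illustrative and not the authors' fitted model: a listener tracks one talker's /b/ voice-onset-time (VOT) mean, and every heard token pulls the belief toward the talker's distribution while shrinking its uncertainty. All numeric values are invented.

```python
# Illustrative conjugate Normal belief update for one talker's /b/ VOT mean
# (all numbers invented; not the authors' model parameters).
prior_mean, prior_var = 0.0, 100.0   # vague prior belief about the mean VOT (ms)
noise_var = 25.0                     # assumed token-to-token variability (ms^2)

def update(mean, var, x):
    """Incorporate one heard VOT token x into the belief (mean, var)."""
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mean = post_var * (mean / var + x / noise_var)
    return post_mean, post_var

# An atypical talker whose /b/ tokens centre on roughly 15 ms VOT:
tokens = [14.0, 16.0, 15.0, 17.0, 13.0]
mean, var = prior_mean, prior_var
for x in tokens:
    mean, var = update(mean, var, x)   # beliefs shift incrementally toward the talker
# After five tokens the mean is near 14-15 ms and the variance has shrunk,
# i.e. the listener has adapted to this talker's distribution.
```

    This is the "adapt to the novel" case; the "recognize the familiar" and "generalize to the similar" cases correspond, in the ideal adapter, to inferring which previously learned talker model (or group-level model) best explains the incoming speech.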

  20. Creating objects and object categories for studying perception and perceptual learning.

    PubMed

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-11-02

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.

  1. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.

  2. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    PubMed

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  3. Discrepancies between parent and adolescent beliefs about daily life topics and performance on an emotion recognition task.

    PubMed

    De Los Reyes, Andres; Lerner, Matthew D; Thomas, Sarah A; Daruwala, Samantha; Goepel, Katherine

    2013-08-01

    Parents, children, and adolescents commonly disagree in their perceptions of a variety of behaviors, including the family relationship and environment, and child and adolescent psychopathology. To this end, numerous studies have examined to what extent increased discrepant perceptions, particularly with regard to perceptions of the family relationship and environment, predict increased child and adolescent psychopathology. The abilities of parents, children, and adolescents to decode and identify others' emotions (i.e., emotion recognition) may play a role in the link between discrepant perceptions and child and adolescent psychopathology. We examined parents' and adolescents' emotion recognition abilities in relation to discrepancies between parent and adolescent perceptions of daily life topics. In a sample of 50 parents and adolescents aged 14 to 17 years (M = 15.4 years; 20 males; 54% African American), parents and adolescents were each administered a widely used performance-based measure of emotion recognition. Parents and adolescents were also administered a structured interview designed to directly assess each of their perceptions of the extent to which discrepancies existed in their beliefs about daily life topics (e.g., whether adolescents should complete their homework and carry out household chores). Interestingly, lower parent and adolescent emotion recognition performance was significantly related to greater parent and adolescent perceived discrepant beliefs about daily life topics. We observed this relation whilst accounting for adolescent age and gender and levels of parent-adolescent conflict. These findings have important implications for understanding and using informant discrepancies in both basic developmental psychopathology research and applied research in clinic settings (e.g., discrepant views on therapeutic goals).

  4. Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.

    PubMed

    Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J

    1997-01-01

    A right-handed man developed a sudden transient, amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expression of emotion, we found an impairment of the vocal perception of fear, but not that of other emotions, such as joy, sadness and anger. Such selective impairment of fear perception was not present in the recognition of facial expression of emotion. Thus, emotional perception varies according to the different aspects of emotions and the different modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In light of recent data suggesting that the amygdala plays a role in the recognition of fear in the auditory as well as the visual modality, this could suggest that the hippocampus may be part of the auditory pathway for fear recognition.

  5. A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system.

    PubMed

    Kaplan, Bernhard A; Lansner, Anders

    2014-01-01

    Olfactory sensory information passes through several processing stages before an odor percept emerges. The question of how the olfactory system learns to create odor representations linking those different levels, and how it learns to connect and discriminate between them, is largely unresolved. We present a large-scale network model with single and multi-compartmental Hodgkin-Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, which was implemented through dendro-dendritic inhibition leading to winner-take-all-like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian-Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors, yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian-Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.
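
    The Hebbian-Bayesian (BCPNN-style) rule mentioned above sets each weight from co-activation statistics, w_ij = log [P(i, j) / (P(i) P(j))], so that units firing together above chance get positive weights and independent units get weights near zero. The sketch below estimates such weights from invented binary activity patterns; it illustrates only the rule, not the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented binary activity patterns: 4 presynaptic and 2 postsynaptic units.
n = 2000
pre = rng.random((n, 4)) < 0.3              # presynaptic firing (Bernoulli 0.3)
post = np.empty((n, 2), dtype=bool)
post[:, 0] = pre[:, 0] | pre[:, 1]          # post-unit 0 fires with pre-units 0/1
post[:, 1] = rng.random(n) < 0.3            # post-unit 1 fires independently

# Hebbian-Bayesian weight: log of observed coactivation over chance level.
eps = 1e-3                                  # regulariser against log(0)
p_i = pre.mean(axis=0) + eps                # P(pre unit i active)
p_j = post.mean(axis=0) + eps               # P(post unit j active)
p_ij = (pre.astype(float).T @ post.astype(float)) / n + eps  # joint P(i, j)

W = np.log(p_ij / np.outer(p_i, p_j))
# W[0, 0] and W[1, 0] come out positive (above-chance coactivation);
# weights onto the independent post-unit 1 stay near zero.
```

    Estimating weights from probabilities like this is what makes the rule "Bayesian"; competitive normalization and spiking dynamics, as used in the paper, are layered on top of the same statistic.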

  7. Infant word recognition: Insights from TRACE simulations

    PubMed Central

    Mayor, Julien; Plunkett, Kim

    2014-01-01

    The TRACE model of speech perception (McClelland & Elman, 1986) is used to simulate results from the infant word recognition literature, to provide a unified, theoretical framework for interpreting these findings. In a first set of simulations, we demonstrate how TRACE can reconcile apparently conflicting findings suggesting, on the one hand, that consonants play a pre-eminent role in lexical acquisition (Nespor, Peña & Mehler, 2003; Nazzi, 2005), and on the other, that there is a symmetry in infant sensitivity to vowel and consonant mispronunciations of familiar words (Mani & Plunkett, 2007). In a second series of simulations, we use TRACE to simulate infants’ graded sensitivity to mispronunciations of familiar words as reported by White and Morgan (2008). An unexpected outcome is that TRACE fails to demonstrate graded sensitivity for White and Morgan’s stimuli unless the inhibitory parameters in TRACE are substantially reduced. We explore the ramifications of this finding for theories of lexical development. Finally, TRACE mimics the impact of phonological neighbourhoods on early word learning reported by Swingley and Aslin (2007). TRACE offers an alternative explanation of these findings in terms of mispronunciations of lexical items rather than imputing word learning to infants. Together these simulations provide an evaluation of Developmental (Jusczyk, 1993) and Familiarity (Metsala, 1999) accounts of word recognition by infants and young children. The findings point to a role for both theoretical approaches whereby vocabulary structure and content constrain infant word recognition in an experience-dependent fashion, and highlight the continuity in the processes and representations involved in lexical development during the second year of life. PMID:24493907
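
    The role of the inhibitory parameters can be illustrated with a minimal interactive-activation sketch (not the TRACE implementation itself): two word units receive bottom-up input of different goodness of fit and compete through lateral inhibition. With strong inhibition the weaker candidate is driven toward zero (all-or-none behavior), while weak inhibition preserves graded activation, mirroring why graded mispronunciation sensitivity only emerged once inhibition was reduced.

```python
import numpy as np

# Minimal interactive-activation competition (an illustration, not TRACE):
# two lexical units with bottom-up inputs of different goodness-of-fit
# compete via lateral inhibition until activations settle.
def settle(inputs, inhibition, steps=500, decay=0.1, rate=0.1):
    a = np.zeros(len(inputs))
    for _ in range(steps):
        net = np.asarray(inputs) - inhibition * (a.sum() - a)  # input minus rivals
        a += rate * (np.clip(net, 0.0, None) * (1.0 - a) - decay * a)
        a = np.clip(a, 0.0, 1.0)
    return a

match = [1.0, 0.8]      # exact match vs. a one-feature mispronunciation
strong = settle(match, inhibition=2.0)   # winner-take-all: runner-up -> ~0
weak = settle(match, inhibition=0.2)     # graded: runner-up stays active
```

    All constants here are invented for the demonstration; the point is only that the size of the lexical inhibition parameter decides whether a near-match mispronunciation retains graded activation or is squashed entirely.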

  9. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22–40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348

  10. Deep kernel learning method for SAR image target recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing applications, including military and geographic research, urgently require accurate target recognition. This paper addresses the synthetic aperture radar (SAR) image target recognition problem by combining deep learning and kernel learning. The model, which has a multilayer multiple-kernel structure, is optimized layer by layer using Support Vector Machine parameters and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
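
    The abstract gives few details, but the general multilayer multiple-kernel idea can be sketched as follows. This is a hedged illustration, not the paper's method: each "layer" convexly combines two RBF kernels of different widths, the next layer re-applies kernels to the representation induced by the previous layer's kernel matrix, and a simple kernel nearest-class-mean read-out stands in for the SVM-based layer-wise optimization. The data are invented, not SAR imagery.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_dists(A, B):
    """Pairwise squared Euclidean distances between rows of A and B."""
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

def rbf(D2, gamma):
    return np.exp(-gamma * D2)

def layer(D2, w=0.5):
    """One multiple-kernel 'layer': convex mix of two RBF widths."""
    return w * rbf(D2, 0.5) + (1 - w) * rbf(D2, 2.0)

# Invented two-class data standing in for SAR image features.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

K1 = layer(sq_dists(X, X))          # layer 1: kernel on the raw inputs
K2 = layer(sq_dists(K1, K1))        # layer 2: kernel on layer-1 similarity rows

# Kernel nearest-class-mean read-out on the deepest kernel
# (in place of the paper's SVM-based layer-wise optimisation).
scores = np.stack([K2[:, y == c].mean(axis=1) for c in (0, 1)], axis=1)
accuracy = (scores.argmax(axis=1) == y).mean()
```

    Stacking kernels this way deepens the representation without explicit feature maps; the paper's contribution lies in how the per-layer kernel weights are trained, which this sketch deliberately leaves out.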

  11. Voice Recognition in Face-Blind Patients

    PubMed Central

    Liu, Ran R.; Pancaroglu, Raika; Hills, Charlotte S.; Duchaine, Brad; Barton, Jason J. S.

    2016-01-01

    Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such case described. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia. PMID:25349193

  12. Noise levels in an urban Asian school environment

    PubMed Central

    Chan, Karen M.K.; Li, Chi Mei; Ma, Estella P.M.; Yiu, Edwin M.L.; McPherson, Bradley

    2015-01-01

    Background noise is known to adversely affect speech perception and speech recognition. High levels of background noise in school classrooms may affect student learning, especially for those pupils who are learning in a second language. The current study aimed to determine the noise level and teacher speech-to-noise ratio (SNR) in Hong Kong classrooms. Noise level was measured in 146 occupied classrooms in 37 schools, including kindergartens, primary schools, secondary schools and special schools, in Hong Kong. The mean noise levels in occupied kindergarten, primary school, secondary school and special school classrooms all exceeded recommended maximum noise levels, and noise reduction measures were seldom used in classrooms. The measured SNRs were not optimal and could have adverse implications for student learning and teachers’ vocal health. Schools in urban Asian environments are advised to consider noise reduction measures in classrooms to better comply with recommended maximum noise levels for classrooms. PMID:25599758
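
    A brief worked example of the acoustics behind these measures (the numbers are illustrative, not the study's data): incoherent sound sources combine on a logarithmic, not linear, scale, and the speech-to-noise ratio is simply the talker's level minus the combined noise level.

```python
import math

def combine_db(levels):
    """Total level of several incoherent sound sources, in dB."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels))

# Illustrative classroom numbers (not the study's data):
teacher_db = 65.0                     # teacher's voice level at a student's ear
noise_db = combine_db([58.0, 55.0])   # e.g. ventilation plus corridor noise
snr = teacher_db - noise_db           # speech-to-noise ratio in dB
# noise_db is about 59.8 dB, so the SNR is only about +5.2 dB.
```

    An SNR of roughly +5 dB falls well short of the +15 dB often recommended for classrooms, which is the kind of shortfall the study reports for Hong Kong schools.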

  14. Peer-to-Peer Recognition of Learning in Open Education

    ERIC Educational Resources Information Center

    Schmidt, Jan Philipp; Geith, Christine; Haklev, Stian; Thierstein, Joel

    2009-01-01

    Recognition in education is the acknowledgment of learning achievements. Accreditation is certification of such recognition by an institution, an organization, a government, a community, etc. There are a number of assessment methods by which learning can be evaluated (exam, practicum, etc.) for the purpose of recognition and accreditation, and…

  15. Teachers' Perceptions of Digital Badges as Recognition of Professional Development

    ERIC Educational Resources Information Center

    Jones, W. Monty; Hope, Samantha; Adams, Brianne

    2018-01-01

    This mixed methods study examined teachers' perceptions and uses of digital badges received as recognition of participation in a professional development program. Quantitative and qualitative survey data was collected from 99 K-12 teachers who were awarded digital badges in Spring 2016. In addition, qualitative data was collected through…

  16. Super-recognizers: People with extraordinary face recognition ability

    PubMed Central

    Russell, Richard; Duchaine, Brad; Nakayama, Ken

    2014-01-01

    We tested four people who claimed to have significantly better than ordinary face recognition ability. Exceptional ability was confirmed in each case. On two very different tests of face recognition, all four experimental subjects performed beyond the range of control subject performance. They also scored significantly better than average on a perceptual discrimination test with faces. This effect was larger with upright than inverted faces, and the four subjects showed a larger ‘inversion effect’ than control subjects, who in turn showed a larger inversion effect than developmental prosopagnosics. This indicates an association between face recognition ability and the magnitude of the inversion effect. Overall, these ‘super-recognizers’ are about as good at face recognition and perception as developmental prosopagnosics are bad. Our findings demonstrate the existence of people with exceptionally good face recognition ability, and show that the range of face recognition and face perception ability is wider than previously acknowledged. PMID:19293090

  18. The effects of digital signal processing features on children's speech recognition and loudness perception.

    PubMed

    Crukley, Jeffery; Scollie, Susan D

    2014-03-01

    The purpose of this study was to determine the effects of hearing instruments set to Desired Sensation Level version 5 (DSL v5) hearing instrument prescription algorithm targets and equipped with directional microphones and digital noise reduction (DNR) on children's sentence recognition in noise performance and loudness perception in a classroom environment. Ten children (ages 8-17 years) with stable, congenital sensorineural hearing losses participated in the study. Participants were fitted bilaterally with behind-the-ear hearing instruments set to DSL v5 prescriptive targets. Sentence recognition in noise was evaluated using the Bamford-Kowal-Bench Speech in Noise Test (Niquette et al., 2003). Loudness perception was evaluated using a modified version of the Contour Test of Loudness Perception (Cox, Alexander, Taylor, & Gray, 1997). Children's sentence recognition in noise performance was significantly better when using directional microphones alone or in combination with DNR than when using omnidirectional microphones alone or in combination with DNR. Children's loudness ratings for sounds above 72 dB SPL were lowest when fitted with the DSL v5 Noise prescription combined with directional microphones. DNR use showed no effect on loudness ratings. Use of the DSL v5 Noise prescription with a directional microphone improved sentence recognition in noise performance and reduced loudness perception ratings for loud sounds relative to a typical clinical reference fitting with the DSL v5 Quiet prescription with no digital signal processing features enabled. Potential clinical strategies are discussed.

  19. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  1. Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users

    PubMed Central

    Gfeller, Kate; Oleson, Jacob; Knutson, John F.; Breheny, Patrick; Driscoll, Virginia; Olszewski, Carol

    2009-01-01

    The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the power of speech perception to predict musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music. PMID:18669126

  2. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    PubMed

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.

  3. Nursing perceptions of patient safety climate in the Gaza Strip, Palestine.

    PubMed

    Elsous, A; Akbari Sari, A; AlJeesh, Y; Radwan, M

    2017-09-01

    This study was undertaken to assess the perception of nurses about patient safety culture and to test whether it is significantly affected by the nurses' position, age, experience and working hours. Patient safety has sparked the interest of healthcare managers, yet there is limited knowledge about the current patient safety culture among nurses in the Gaza Strip. This was a descriptive cross-sectional study, administering the Arabic Safety Attitude Questionnaire (Short Form 2006) to 210 nurses in four public general hospitals. Job Satisfaction was the most highly perceived factor affecting patient safety, followed by Perception of Management. Safety culture varied across nursing position, age, work experience and working hours. Nurse Managers had more positive attitudes towards patients than frontline clinicians did. The more experience nurses had, the better their attitudes towards patient safety. Nurses who worked the minimum weekly required hours and who were 35 years and older had better attitudes towards all patient safety dimensions except for Stress Recognition. Nurses with a positive attitude had better collaboration with healthcare professionals than those without a positive attitude. Generalization is limited, as nurses who worked in private and specialized hospitals were excluded. Evaluation of the safety culture is the essential starting point to identify hindrances or drivers for safe patient care. Job Satisfaction, Perception of Management and Teamwork necessitate reinforcement, while Working Conditions, Stress Recognition and Safety Climate require improvement. Ensuring job satisfaction through adequate staffing levels, providing incentives and maintaining a collegial environment require both strategic planning and institutional policies at the higher administrative level. Creation of a non-punitive and learning environment, promoting open communication and fostering continuous education should be fundamental aspects of hospital management.
A policy of mixing experienced nurses with inexperienced nurses should be considered. © 2017 International Council of Nurses.

  4. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    PubMed Central

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  5. A self-organized learning strategy for object recognition by an embedded line of attraction

    NASA Astrophysics Data System (ADS)

    Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.

    2012-04-01

    For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
    Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify individuals and provides a better recognition rate than state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also shown excellent recognition rates for images captured in complex lighting environments. Experiments performed on the Japanese female face expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
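    The converge-to-remember versus fail-to-converge-for-novelty behavior described above can be illustrated with a classical Hopfield-style associative memory. This is a simplified stand-in for intuition, not the authors' line-attractor network:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning; self-connections zeroed."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, x, steps=20):
    """Iterate sign updates. Settling on a fixed point plays the role
    of the stability mode (a remembered pattern); failing to settle
    plays the role of the instability mode flagging a novel pattern."""
    for _ in range(steps):
        nxt = np.sign(W @ x)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, x):
            return x, True
        x = nxt
    return x, False

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1, 1, -1, -1, 1, 1, -1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]            # corrupt one element
state, settled = recall(W, noisy)
print(settled, np.array_equal(state, stored[0]))   # → True True
```

    A pattern far from every stored state would either fail to settle or settle on a spurious state; in the paper's scheme that instability signal triggers the creation of a new attractor rather than a forced association.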

  6. Evidence for view-invariant face recognition units in unfamiliar face learning.

    PubMed

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  7. How adolescents learn about risk perception and behavior in regards to alcohol use in light of social learning theory: a qualitative study in Bogotá, Colombia.

    PubMed

    Trujillo, Elena María; Suárez, Daniel Enrique; Lema, Mariana; Londoño, Alicia

    2015-02-01

    In Colombia, the use of alcohol is one of the main risky behaviors carried out by adolescents, given that alcohol is the principal drug of abuse in this age group. Understanding how adolescents learn about risk and behavior is important in developing effective prevention programs. The Theory of Social learning underlines the importance of social interaction in the learning process. It suggests that learning can occur in three ways: a live model in which a person is enacting the desired behavior, verbal instruction when the desired behavior is described, and symbolic learning in which modeling occurs by influence of the media. This study explores these three forms of learning in the perception of risk and behavior related to the use of alcohol in a group of students between 12 and 14 years of age in Bogotá, Colombia. This is a qualitative research study, which is part of a larger study exploring the social representations of risk and alcohol use in adolescents and their communities. The sample group included 160 students from two middle schools (7th and 8th graders) in Bogotá, Colombia. Six sessions of participant observation, 12 semi-structured interviews, and 12 focus group discussions were conducted for data collection. Data were analyzed using the Atlas ti software (V7.0) (ATLAS.ti Scientific Software Development GmbH, London, UK), and categories of analysis were developed using a framework analysis approach. Adolescents can identify several risks related to the use of alcohol, which for the most part, appear to have been learned through verbal instruction. However, this risk recognition does not appear to correlate with their behavior. Parental modeling and messages conveyed by the media represent two other significant sources of learning that are constantly contradicting the messages relayed through verbal instruction and correlate to a greater extent with adolescent behavior. 
The three different forms of learning described by Social Learning Theory play a significant role in the construction of risk perception and behavior in adolescents. This underlines the necessity of consciously evaluating how examples set by adults as well as the ideas expressed by the media influence adolescents' attitudes and behavior, ensuring that these do not directly contradict and ultimately obliterate the messages we are constantly trying to convey to this age group.

  8. Creating a meaningful visual perception in blind volunteers by optic nerve stimulation

    NASA Astrophysics Data System (ADS)

    Brelén, M. E.; Duret, F.; Gérard, B.; Delbeke, J.; Veraart, C.

    2005-03-01

    A blind volunteer, suffering from retinitis pigmentosa, has been chronically implanted with an optic nerve visual prosthesis. Vision rehabilitation with this volunteer has concentrated on the development of a stimulation strategy according to which video camera images are converted into stimulation pulses. The aim is to convey as much information as possible about the visual scene within the limits of the device's capabilities. Pattern recognition tasks were used to assess the effectiveness of the stimulation strategy. The results demonstrate how even a relatively basic algorithm can efficiently convey useful information regarding the visual scene. By increasing the number of phosphenes used in the algorithm, better performance is observed but a longer training period is required. After a learning period, the volunteer achieved a pattern recognition score of 85% at 54 s on average per pattern. After nine evaluation sessions, when using a stimulation strategy exploiting all available phosphenes, no saturation effect has yet been observed.

  9. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and test set because of differences in lighting, shading, race and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE and NVIE databases show that the sparse-coding-based transfer learning method can effectively improve the expression recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
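    As a rough illustration of the sparse-coding step (not the authors' implementation), a sparse code over a dictionary learned on the source domain can be computed with ISTA; the orthonormal toy dictionary below is an assumption for demonstration only:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5*||x - D@a||^2 + lam*||a||_1.
    D is a dictionary learned on the source domain; the sparse code a
    then serves as the transferred feature representation of x."""
    L = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

D = np.eye(4)                              # toy orthonormal dictionary (assumption)
x = np.array([2.0, 0.0, 0.0, 0.0])
a = sparse_code(D, x)
# For an orthonormal D the minimizer is x soft-thresholded by lam,
# so a[0] converges to 2.0 - 0.1 = 1.9 and the other entries stay 0.
```

    In the cross-domain setting, classifiers would then be trained on these sparse codes rather than on raw pixels, since the shared dictionary absorbs part of the domain shift.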

  10. Testing the Recognition and Perception of Errors in Context

    ERIC Educational Resources Information Center

    Brandenburg, Laura C.

    2015-01-01

    This study tests the recognition of errors in context and whether the presence of errors affects the reader's perception of the writer's ethos. In an experimental, posttest only design, participants were randomly assigned a memo to read in an online survey: one version with errors and one version without. Of the six intentional errors in version…

  11. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of 'what' (object features) and 'where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using 'where' information; (3) representation of 'what' information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of 'what' (Sensory Memory) and 'where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and shifts of attention in Motor Memory. Object recognition consists of the successive recall (from Motor Memory) and execution of shifts of attention and successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariant with respect to shift, rotation, and scale.
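    The recall-shift-verify loop described above can be sketched as a toy scanpath matcher. The grid coordinates and feature labels below are invented for illustration; storing relative shifts is what yields the shift invariance the model claims:

```python
def learn_scanpath(fixations):
    """fixations: list of ((x, y), feature) in viewing order.
    Motor Memory = relative attention shifts between fixations;
    Sensory Memory = the feature expected after each shift."""
    shifts, expected = [], []
    for ((x0, y0), _), ((x1, y1), feat) in zip(fixations, fixations[1:]):
        shifts.append((x1 - x0, y1 - y0))
        expected.append(feat)
    return shifts, expected

def recognize(image_features, start, shifts, expected):
    """image_features: dict mapping (x, y) -> feature found there.
    Replays the stored shifts from a candidate start fixation and
    verifies each expected feature; any mismatch rejects the object."""
    x, y = start
    for (dx, dy), feat in zip(shifts, expected):
        x, y = x + dx, y + dy
        if image_features.get((x, y)) != feat:
            return False
    return True

# Invented edge features at three fixation points of a learned "object".
face = [((0, 0), "edge_v"), ((2, 1), "edge_h"), ((4, 0), "edge_v")]
shifts, expected = learn_scanpath(face)
scene = {pos: feat for pos, feat in face}
shifted = {(x + 3, y + 5): f for (x, y), f in scene.items()}   # translated copy
print(recognize(scene, (0, 0), shifts, expected))     # → True
print(recognize(shifted, (3, 5), shifts, expected))   # → True (shift-invariant)
```

    Rotation and scale invariance in the actual model come from anchoring the frame of reference to the edge at each fixation, which this flat-grid sketch does not attempt.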

  12. Temporal abstraction and inductive logic programming for arrhythmia recognition from electrocardiograms.

    PubMed

    Carrault, G; Cordier, M-O; Quiniou, R; Wang, F

    2003-07-01

    This paper proposes a novel approach to cardiac arrhythmia recognition from electrocardiograms (ECGs). ECGs record the electrical activity of the heart and are used to diagnose many heart disorders. The numerical ECG is first temporally abstracted into series of time-stamped events. Temporal abstraction makes use of artificial neural networks to extract interesting waves and their features from the input signals. A temporal reasoner called a chronicle recogniser processes such series in order to discover temporal patterns called chronicles which can be related to cardiac arrhythmias. Generally, it is difficult to elicit an accurate set of chronicles from a doctor. Thus, we propose to learn automatically from symbolic ECG examples the chronicles discriminating the arrhythmias belonging to some specific subset. Since temporal relationships are of major importance, inductive logic programming (ILP) is the tool of choice as it enables first-order relational learning. The approach has been evaluated on real ECGs taken from the MIT-BIH database. The performance of the different modules as well as the efficiency of the whole system is presented. The results are rather good and demonstrate that integrating numerical techniques for low level perception and symbolic techniques for high level classification is very valuable.
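    A chronicle, as used above, is an ordered set of event labels with delay constraints between successive occurrences. A minimal recogniser (a greedy sketch with hypothetical event labels, not the authors' chronicle recognition engine) might look like:

```python
def matches(chronicle, delays, events):
    """chronicle: ordered event labels; delays[i]: (min_dt, max_dt) allowed
    between occurrences of chronicle[i] and chronicle[i+1].
    events: time-stamped (timestamp, label) pairs, sorted by timestamp.
    Greedy scan: find the labels in order with delays inside their bounds."""
    it = iter(events)
    t_prev = None
    for i, label in enumerate(chronicle):
        for t, lab in it:
            if lab != label:
                continue
            if t_prev is None or delays[i - 1][0] <= t - t_prev <= delays[i - 1][1]:
                t_prev = t
                break
        else:
            return False        # ran out of events before completing the pattern
    return True

# Hypothetical abstracted ECG events: a P wave followed by a QRS complex
# 120-200 ms later is taken here to represent a normal beat.
normal = [(0, "P"), (160, "QRS")]
delayed = [(0, "P"), (400, "QRS")]
print(matches(["P", "QRS"], [(120, 200)], normal))    # → True
print(matches(["P", "QRS"], [(120, 200)], delayed))   # → False
```

    In the paper this symbolic layer sits on top of the neural-network wave detectors, and the discriminating chronicles themselves are induced by ILP rather than hand-written as here.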

  13. Building Knowledge through Portfolio Learning in Prior Learning Assessment and Recognition

    ERIC Educational Resources Information Center

    Conrad, Dianne

    2008-01-01

    It is important for academic credibility that the process of prior learning assessment and recognition (PLAR) keeps learning and knowledge as its foundational tenets. Doing so ensures PLAR's recognition as a fertile ground for learners' cognitive and personal growth. In many postsecondary venues, PLAR is often misunderstood and confused with…

  14. Making Learning Visible: Identification, Assessment and Recognition of Non-Formal Learning in Europe.

    ERIC Educational Resources Information Center

    Bjornavold, Jens

    Policies and practices in the areas of identification, assessment, and recognition of nonformal learning in the European Union (EU) were reviewed. The review focused on national and EU-level experiences regarding the following areas and issues: recognition of the contextual nature of learning; identification of methodological requirements for…

  15. Naming and recognizing famous faces in temporal lobe epilepsy.

    PubMed

    Glosser, G; Salvucci, A E; Chiaravalloti, N D

    2003-07-08

    To assess naming and recognition of faces of familiar famous people in patients with epilepsy before and after anterior temporal lobectomy (ATL). Color photographs of famous people were presented for naming and description to 63 patients with temporal lobe epilepsy (TLE) either before or after ATL and to 10 healthy age- and education-matched controls. Spontaneous naming of photographed famous people was impaired in all patient groups, but was most abnormal in patients who had undergone left ATL. When allowed to demonstrate knowledge of the famous faces through verbal descriptions, rather than naming, patients with left TLE, left ATL, and right TLE improved to normal levels, but patients with right ATL were still impaired, suggesting a new deficit in identifying famous faces. Naming of famous people was related to naming of other common objects, verbal memory, and perceptual discrimination of faces. Recognition of the identity of pictured famous people was more related to visuospatial perception and memory. Lesions in anterior regions of the right temporal lobe impair recognition of the identities of familiar faces, as well as the learning of new faces. Lesions in the left temporal lobe, especially in anterior regions, disrupt access to the names of known people, but do not affect recognition of the identities of famous faces. Results are consistent with the hypothesized role of lateralized anterior temporal lobe structures in facial recognition and naming of unique entities.

  16. Sensing the intruder: a quantitative threshold for recognition cues perception in honeybees

    NASA Astrophysics Data System (ADS)

    Cappa, Federico; Bruschini, Claudia; Cipollini, Maria; Pieraccini, Giuseppe; Cervo, Rita

    2014-02-01

    The ability to discriminate between nestmates and non-nestmates is essential to defend social insect colonies from intruders. Over the years, nestmate recognition has been extensively studied in the honeybee Apis mellifera; nevertheless, the quantitative perceptual aspects at the basis of the recognition system remain an unexplored subject in this species. To test for a quantitative perception threshold for cuticular hydrocarbon nestmate recognition cues, we conducted behavioural assays by presenting different amounts of a foreign forager's chemical profile to honeybees at the entrance of their colonies. We found that explorative and aggressive responses increased with the amount of cues presented, consistent with a threshold mechanism, highlighting the importance of quantitative perceptual features for the recognition processes in A. mellifera.

  17. Tactile agnosia. Underlying impairment and implications for normal tactile object recognition.

    PubMed

    Reed, C L; Caselli, R J; Farah, M J

    1996-06-01

    In a series of experimental investigations of a subject with a unilateral impairment of tactile object recognition without impaired tactile sensation, several issues were addressed. First, is tactile agnosia secondary to a general impairment of spatial cognition? On tests of spatial ability, including those directed at the same spatial integration process assumed to be taxed by tactile object recognition, the subject performed well, implying a more specific impairment of high-level, modality-specific tactile perception. Secondly, within the realm of high-level tactile perception, is there a distinction between the ability to derive shape ('what') and spatial ('where') information? Our testing showed an impairment confined to shape perception. Thirdly, what aspects of shape perception are impaired in tactile agnosia? Our results indicate that despite accurate encoding of metric length and normal manual exploration strategies, the ability to perceive objects tactually with the impaired hand deteriorated as the complexity of shape increased. In addition, asymmetrical performance was not found for other body surfaces (e.g. her feet). Our results suggest that tactile shape perception can be disrupted independently of general spatial ability, tactile spatial ability, manual shape exploration, or even the precise perception of metric length in the tactile modality.

  18. The Benefits of Residual Hair Cell Function for Speech and Music Perception in Pediatric Bimodal Cochlear Implant Listeners.

    PubMed

    Cheng, Xiaoting; Liu, Yangwenyi; Wang, Bing; Yuan, Yasheng; Galvin, John J; Fu, Qian-Jie; Shu, Yilai; Chen, Bing

    2018-01-01

    The aim of this study was to investigate the benefits of residual hair cell function for speech and music perception in bimodal pediatric Mandarin-speaking cochlear implant (CI) listeners. Speech and music performance was measured in 35 Mandarin-speaking pediatric CI users for unilateral (CI-only) and bimodal (CI plus contralateral hearing aid, CI + HA) listening. Mandarin speech perception was measured for vowels, consonants, lexical tones, and sentences in quiet. Music perception was measured for melodic contour identification (MCI). Combined electric and acoustic hearing significantly improved MCI and Mandarin tone recognition performance, relative to CI-only performance. For MCI, performance was significantly better with bimodal listening for all semitone spacing conditions (p < 0.05 in all cases). For tone recognition, bimodal performance was significantly better only for tone 2 (rising; p < 0.05). There were no significant differences between CI-only and CI + HA for vowel, consonant, or sentence recognition. The results suggest that combined electric and acoustic hearing can significantly improve perception of music and Mandarin tones in pediatric Mandarin-speaking CI patients. Music and lexical tone perception depend strongly on pitch perception, and the contralateral acoustic hearing provided by residual hair cell function supplies pitch cues that are generally not well preserved in electric hearing.

  19. Face familiarity promotes stable identity recognition: exploring face perception using serial dependence

    PubMed Central

    Kok, Rebecca; Van der Burg, Erik; Rhodes, Gillian; Alais, David

    2017-01-01

    Studies suggest that familiar faces are processed in a manner distinct from unfamiliar faces and that familiarity with a face confers an advantage in identity recognition. Our visual system seems to capitalize on experience to build stable face representations that are impervious to variation in retinal input that may occur due to changes in lighting, viewpoint, viewing distance, eye movements, etc. Emerging evidence also suggests that our visual system maintains a continuous perception of a face's identity from one moment to the next despite the retinal input variations through serial dependence. This study investigates whether interactions occur between face familiarity and serial dependence. In two experiments, participants used a continuous scale to rate attractiveness of unfamiliar and familiar faces (either experimentally learned or famous) presented in rapid sequences. Both experiments revealed robust inter-trial effects in which attractiveness ratings for a given face depended on the preceding face's attractiveness. This inter-trial attractiveness effect was most pronounced for unfamiliar faces. Indeed, when participants were familiar with a given face, attractiveness ratings showed significantly less serial dependence. These results represent the first evidence that familiar faces can resist the temporal integration seen in sequential dependencies and highlight the importance of familiarity to visual cognition. PMID:28405355

  20. Signed reward prediction errors drive declarative learning.

    PubMed

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.
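The signed RPE itself is the standard delta-rule quantity (reward minus expectation). A minimal sketch, assuming the textbook reinforcement-learning formulation rather than the authors' experimental coupling or analysis code:

```python
# Textbook delta rule illustrating a signed reward prediction error (SRPE);
# a positive value is the "better-than-expected" signal the study links
# to improved declarative recognition.

def srpe_update(expected, reward, alpha=0.1):
    """Return the signed RPE and the updated expectation, which moves
    a fraction alpha of the way toward the observed reward."""
    rpe = reward - expected            # > 0 means better than expected
    return rpe, expected + alpha * rpe

rpe, new_expectation = srpe_update(expected=0.5, reward=1.0)
print(rpe, new_expectation)  # 0.5 0.55
```

Unsigned-RPE accounts would instead use abs(rpe) as the learning signal; the study's finding is that the signed quantity tracks recognition performance.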

  1. Developing Recognition Programs for Units within Student Affairs.

    ERIC Educational Resources Information Center

    Avery, Cynthia M.

    2001-01-01

    According to many psychologists, the connections between motivation and rewards and recognition are crucial to employee satisfaction. A plan for developing a multi-layered recognition program within a division of student affairs is described. These recognition programs are designed taking into account the differences in perceptions of awards by…

  2. On-chip learning of hyper-spectral data for real time target recognition

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Daud, T.; Thakoor, A.

    2000-01-01

    In this paper, we use the cascade error projection (CEP) learning algorithm (shown to be hardware-implementable) with an on-chip learning (OCL) scheme to obtain a three-orders-of-magnitude speed-up in target recognition compared with software-based learning schemes. Thus, real-time learning as well as data processing for target recognition can be achieved.

  3. Use of intonation contours for speech recognition in noise by cochlear implant recipients.

    PubMed

    Meister, Hartmut; Landwehr, Markus; Pyschny, Verena; Grugel, Linda; Walger, Martin

    2011-05-01

    The corruption of intonation contours has detrimental effects on sentence-based speech recognition in normal-hearing listeners [Binns and Culling (2007). J. Acoust. Soc. Am. 122, 1765-1776]. This paper examines whether this finding also applies to cochlear implant (CI) recipients. The subjects' F0-discrimination and speech perception in the presence of noise were measured, using sentences with regular and inverted F0-contours. The results revealed that speech recognition for regular contours was significantly better than for inverted contours. This difference was related to the subjects' F0-discrimination, providing further evidence that the perception of intonation patterns is important for CI-mediated speech recognition in noise.
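One plausible way to construct such inverted contours, following the Binns and Culling manipulation as described, is to mirror each F0 value about the utterance's mean F0, so rises become falls and vice versa. This is an assumption about the stimulus construction, not code from the study:

```python
# Hypothetical construction of an "inverted" F0 contour: mirror every
# pitch value about the mean F0 of the utterance.

def invert_f0(contour_hz):
    """Return the contour reflected about its mean value (in Hz)."""
    mean_f0 = sum(contour_hz) / len(contour_hz)
    return [2 * mean_f0 - f for f in contour_hz]

# A rising contour becomes a falling one; the mean pitch is preserved.
print(invert_f0([100, 120, 140]))  # [140.0, 120.0, 100.0]
```

Inversion on a linear Hz scale is the simplest variant; perceptual studies often perform the reflection on a log-frequency or ERB-rate scale instead.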

  4. Computational approaches to motor learning by imitation.

    PubMed Central

    Schaal, Stefan; Ijspeert, Auke; Billard, Aude

    2003-01-01

    Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking; indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379

  5. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.

    PubMed

    Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V

    2018-04-01

    Despite significant advances in computational algorithms and development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than human tactile perception. Inspired by the efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency. These results have significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
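The ELM algorithm named above has a simple software analogue: a fixed random hidden layer whose output weights are solved in closed form by least squares. The NumPy sketch below illustrates that idea only; the paper's system runs on a dedicated chip with spike-based inputs, which this does not model, and the toy "texture" data are invented.

```python
# Minimal extreme learning machine (ELM) classifier: the hidden layer is
# random and never trained; only the output weights are fit, in one
# closed-form least-squares solve.
import numpy as np

class ELM:
    def __init__(self, n_inputs, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)
        self.n_classes = n_classes

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]          # one-hot targets
        self.beta = np.linalg.pinv(H) @ T      # least-squares output weights
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Invented two-class "texture" features that are easy to separate.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
print(ELM(2, 20, 2).fit(X, y).predict(X))  # expect [0 0 1 1]
```

Because training reduces to a single pseudoinverse, ELM fitting is fast and maps naturally onto parallel hardware, which is the property the chip implementation exploits.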

  6. Deep Neural Networks as a Computational Model for Human Shape Sensitivity

    PubMed Central

    Op de Beeck, Hans P.

    2016-01-01

    Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699

  7. Genetic Mapping in Mice Reveals the Involvement of Pcdh9 in Long-Term Social and Object Recognition and Sensorimotor Development.

    PubMed

    Bruining, Hilgo; Matsui, Asuka; Oguro-Ando, Asami; Kahn, René S; Van't Spijker, Heleen M; Akkermans, Guus; Stiedl, Oliver; van Engeland, Herman; Koopmans, Bastijn; van Lith, Hein A; Oppelaar, Hugo; Tieland, Liselotte; Nonkes, Lourens J; Yagi, Takeshi; Kaneko, Ryosuke; Burbach, J Peter H; Yamamoto, Nobuhiko; Kas, Martien J

    2015-10-01

    Quantitative genetic analysis of basic mouse behaviors is a powerful tool to identify novel genetic phenotypes contributing to neurobehavioral disorders. Here, we analyzed genetic contributions to single-trial, long-term social and nonsocial recognition and subsequently studied the functional impact of an identified candidate gene on behavioral development. Genetic mapping of single-trial social recognition was performed in chromosome substitution strains, a sophisticated tool for detecting quantitative trait loci (QTL) of complex traits. Follow-up occurred by generating and testing knockout (KO) mice of a selected QTL candidate gene. Functional characterization of these mice was performed through behavioral and neurological assessments across developmental stages and analyses of gene expression and brain morphology. Chromosome substitution strain 14 mapping studies revealed an overlapping QTL related to long-term social and object recognition harboring Pcdh9, a cell-adhesion gene previously associated with autism spectrum disorder. Specific long-term social and object recognition deficits were confirmed in homozygous (KO) Pcdh9-deficient mice, while heterozygous mice only showed long-term social recognition impairment. The recognition deficits in KO mice were not associated with alterations in perception, multi-trial discrimination learning, sociability, behavioral flexibility, or fear memory. Rather, KO mice showed additional impairments in sensorimotor development reflected by early touch-evoked biting, rotarod performance, and sensory gating deficits. This profile emerged with structural changes in deep layers of sensory cortices, where Pcdh9 is selectively expressed. This behavior-to-gene study implicates Pcdh9 in cognitive functions required for long-term social and nonsocial recognition. This role is supported by the involvement of Pcdh9 in sensory cortex development and sensorimotor phenotypes. Copyright © 2015 Society of Biological Psychiatry. 
Published by Elsevier Inc. All rights reserved.

  8. Neural dynamics of object-based multifocal visual spatial attention and priming: Object cueing, useful-field-of-view, and crowding

    PubMed Central

    Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio

    2015-01-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. 
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects. PMID:22425615

  10. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    PubMed Central

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
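The morphing step mentioned above can be illustrated by its simplest case: linear interpolation between corresponding vertices of two shapes. This is a hedged sketch of the general idea only, not the digital-embryo toolchain, whose morphogenesis and phylogenesis simulations are far richer.

```python
# Illustrative linear shape morphing between two vertex lists with
# one-to-one vertex correspondence; t=0 gives shape A, t=1 gives shape B,
# and intermediate t values generate systematic shape variations.

def morph(shape_a, shape_b, t):
    """Blend two corresponding 2-D vertex lists by parameter t in [0, 1]."""
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(shape_a, shape_b)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.2), (1.2, 0.5), (0.5, 1.2), (-0.2, 0.5)]
print(morph(square, diamond, 0.5))  # halfway shape between the two
```

Sampling t at several values yields a graded family of shapes, which is the kind of controlled, measurable variation the authors argue stimulus sets should have.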

  11. Coordinate Transformations in Object Recognition

    ERIC Educational Resources Information Center

    Graf, Markus

    2006-01-01

    A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…

  12. Quest Hierarchy for Hyperspectral Face Recognition

    DTIC Science & Technology

    2011-03-01

    numerous face recognition algorithms available, several very good literature surveys are available that include Abate [29], Samal [110], Kong [18], Zou... Perception, Japan (January 1994). [110] Samal, Ashok and P. Iyengar, Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey

  13. Computational Modeling of Emotions and Affect in Social-Cultural Interaction

    DTIC Science & Technology

    2013-10-02

    acoustic and textual information sources. Second, a cross-lingual study was performed that shed light on how human perception and automatic recognition...speech is produced, a speaker’s pitch and intonational pattern, and word usage. Better feature representation and advanced approaches were used to...recognition performance, and improved our understanding of language/cultural impact on human perception of emotion and automatic classification. • Units

  14. Neurophysiology and functional neuroanatomy of pain perception.

    PubMed

    Schnitzler, A; Ploner, M

    2000-11-01

    The traditional view that the cerebral cortex is not involved in pain processing has been abandoned during the past decades based on anatomic and physiologic investigations in animals, and lesion, functional neuroimaging, and neurophysiologic studies in humans. These studies have revealed an extensive central network associated with nociception that consistently includes the thalamus, the primary (SI) and secondary (SII) somatosensory cortices, the insula, and the anterior cingulate cortex (ACC). Anatomic and electrophysiologic data show that these cortical regions receive direct nociceptive thalamic input. From the results of human studies there is growing evidence that these different cortical structures contribute to different dimensions of pain experience. The SI cortex appears to be mainly involved in sensory-discriminative aspects of pain. The SII cortex seems to have an important role in recognition, learning, and memory of painful events. The insula has been proposed to be involved in autonomic reactions to noxious stimuli and in affective aspects of pain-related learning and memory. The ACC is closely related to pain unpleasantness and may subserve the integration of general affect, cognition, and response selection. The authors review the evidence on which the proposed relationship between cortical areas, pain-related neural activations, and components of pain perception is based.

  15. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  16. Towards Real-Time Speech Emotion Recognition for Affective E-Learning

    ERIC Educational Resources Information Center

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon a learner's vocal intonations and facial expressions in order…

  17. ICPR-2016 - International Conference on Pattern Recognition

    Science.gov Websites

    ICPR 2016 featured invited talks (including "Learning for Scene Understanding"), paper awards such as the Best Piero Zamperoni Student Paper ("...-Paced Dictionary Learning for Cross-Domain Retrieval and Recognition"; Xu, Dan; Song, Jingkuan; Alameda...), and discussions on recent advances in the fields of Pattern Recognition, Machine Learning, and Computer Vision.

  18. Scene recognition based on integrating active learning with dictionary learning

    NASA Astrophysics Data System (ADS)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. To obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as its classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
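    The sampling idea described here, scoring unlabeled samples by both predictive uncertainty and representativeness, can be sketched generically. The specific choices below (prediction entropy for uncertainty, mean cosine similarity to the pool for representativeness, and the `select_queries` name) are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_queries(probs, feats, k=2, lam=0.5):
    """Rank unlabeled samples for labeling by a weighted sum of
    uncertainty (entropy of predicted class probabilities) and
    representativeness (mean cosine similarity to the pool)."""
    eps = 1e-12
    # uncertainty: entropy of each sample's class-probability row
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    # representativeness: mean cosine similarity to the other samples
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)
    sim = f @ f.T
    rep = (sim.sum(axis=1) - 1.0) / (len(feats) - 1)  # exclude self-similarity
    score = lam * entropy + (1.0 - lam) * rep
    return np.argsort(-score)[:k]                     # indices of top-k samples
```

A sample with near-uniform predicted probabilities that also lies near the bulk of the pool scores highest and is queried first.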

  19. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task in which the number of facial expressions to which each face was exposed during a training session was manipulated. Faces were trained with either multiple expressions or a single expression, and they were later tested with either the same or different expressions. We found that recognition performance after learning three emotional expressions showed no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  20. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784
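    The error-driven learning invoked in this review is classically formalized by the Rescorla-Wagner delta rule: association strength moves toward the observed outcome in proportion to the prediction error. A minimal single-cue sketch (parameter values are illustrative, not taken from the review):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Delta-rule learning: on each trial, nudge association strength v
    toward the outcome by a fraction alpha of the prediction error."""
    v = 0.0
    history = []
    for reinforced in trials:
        outcome = lam if reinforced else 0.0
        v += alpha * (outcome - v)   # error-driven update
        history.append(v)
    return history
```

With repeated reinforcement the prediction error shrinks each trial, so learning follows the familiar negatively accelerated curve toward the asymptote `lam`.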

  1. Recognition of Prior Learning at the Centre of a National Strategy: Tensions between Professional Gains and Personal Development

    ERIC Educational Resources Information Center

    Lima, Licínio C.; Guimarães, Paula

    2016-01-01

    This paper focuses on recognition of prior learning as part of a national policy based on European Union guidelines for lifelong learning, and it explains how recognition of prior learning has been perceived since it was implemented in Portugal in 2000. Data discussed are the result of a mixed method research project that surveyed adult learners,…

  2. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    PubMed Central

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2012-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression. PMID:20617892

  3. Spaced learning enhances subsequent recognition memory by reducing neural repetition suppression.

    PubMed

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-07-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half under the massed learning condition (i.e., four consecutive repetitions with jittered interstimulus interval) and the other half under the spaced learning condition (i.e., the four repetitions were interleaved). Recognition memory tests afterward revealed a significant spacing effect: Participants recognized more items learned under the spaced learning condition than under the massed learning condition. Successful face memory encoding was associated with stronger activation in the bilateral fusiform gyrus, which showed a significant repetition suppression effect modulated by subsequent memory status and spaced learning. Specifically, remembered faces showed smaller repetition suppression than forgotten faces under both learning conditions, and spaced learning significantly reduced repetition suppression. These results suggest that spaced learning enhances recognition memory by reducing neural repetition suppression.

  4. Melodic Contour Identification and Music Perception by Cochlear Implant Users

    PubMed Central

    Galvin, John J.; Fu, Qian-Jie; Shannon, Robert V.

    2013-01-01

    Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically-experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occur with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation. PMID:19673835

  5. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  6. Large-Corpus Phoneme and Word Recognition and the Generality of Lexical Context in CVC Word Perception

    ERIC Educational Resources Information Center

    Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.

    2014-01-01

    Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j, the j-factor, reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
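    The j-factor cited here is a simple ratio of log recognition probabilities: if a whole is recognized only when j independent parts are, then p_whole = p_part^j. A minimal sketch (the function name is ours):

```python
import math

def j_factor(p_whole, p_part):
    """Estimate j, the apparent number of independent perceptual units:
    under independence, p_whole = p_part ** j,
    so j = log(p_whole) / log(p_part)."""
    return math.log(p_whole) / math.log(p_part)
```

For example, if CVC words behave as three independently recognized phonemes, the measured whole-word probability is the cube of the phoneme probability and j comes out near 3.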

  7. Deep Learning-Based Noise Reduction Approach to Improve Speech Intelligibility for Cochlear Implant Recipients.

    PubMed

    Lai, Ying-Hui; Tsao, Yu; Lu, Xugang; Chen, Fei; Su, Yu-Ting; Chen, Kuang-Chao; Chen, Yu-Hsuan; Chen, Li-Ching; Po-Hung Li, Lieber; Lee, Chin-Hui

    2018-01-20

    We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach under noisy conditions with challenging noise types at low signal-to-noise ratio (SNR) levels for Mandarin-speaking cochlear implant (CI) recipients. The deep learning-based NR approach used in this study consists of two modules, a noise classifier (NC) and a deep denoising autoencoder (DDAE), and is thus termed NC + DDAE. In a series of comprehensive experiments, we conduct qualitative and quantitative analyses of the NC module and the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR and classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR levels. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches by the amplitude envelope and spectrogram plots of the processed utterances. Quantitative objective measures include (1) the normalized covariance measure, to test the intelligibility of the utterances processed by each of the NR approaches; and (2) speech recognition tests conducted with nine Mandarin-speaking CI recipients, who used their own clinical speech processors during testing. The experimental results of the objective evaluation and listening tests indicate that under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two classical NR techniques, under both matched and mismatched training-testing conditions. Compared to the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach has superior noise-suppression capabilities and introduces less distortion to key speech envelope information, thus improving speech recognition more effectively for Mandarin-speaking CI recipients. These results suggest that the proposed deep learning-based NR approach could potentially be integrated into existing CI signal processors to help overcome the degradation of speech perception caused by noise.
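    At its core, a denoising autoencoder of the kind used in the DDAE module is a network trained to map noisy input features back to clean targets. The toy single-hidden-layer NumPy version below illustrates the idea only; the dimensions, learning rate, and training loop are our own assumptions, not the authors' NC + DDAE architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: "clean" 16-bin feature vectors and additive-noise versions
clean = rng.random((200, 16))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# one hidden layer of 8 sigmoid units, linear output layer
W1 = 0.1 * rng.standard_normal((16, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 16)); b2 = np.zeros(16)

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, h @ W2 + b2

def loss():
    return float(np.mean((forward(noisy)[1] - clean) ** 2))

before = loss()
lr = 0.1
for _ in range(300):
    h, out = forward(noisy)
    err = (out - clean) / len(noisy)       # dLoss/d(out), up to a constant
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)      # backprop through the sigmoid
    gW1, gb1 = noisy.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
after = loss()
```

Training drives the reconstruction error from noisy input to clean target down; a production system would of course use deeper networks and real speech features rather than random vectors.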

  8. Neural networks for learning and prediction with applications to remote sensing and speech perception

    NASA Astrophysics Data System (ADS)

    Gjaja, Marin N.

    1997-11-01

    Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. 
An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data: that sensory experience guides language-specific development of an auditory neural map, and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.
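    The core fuzzy ART loop mentioned in this abstract (complement coding, choice function, vigilance test, fast learning) is compact enough to sketch. The parameter values and function name here are illustrative, not the thesis's exact configuration:

```python
import numpy as np

def fuzzy_art(data, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering. Inputs are complement-coded so the
    L1 norm of every coded input is constant; categories are committed
    on demand when no existing category passes the vigilance test."""
    I = np.hstack([data, 1.0 - data])      # complement coding
    weights, labels = [], []
    for x in I:
        # rank committed categories by the choice function
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(x, weights[j]).sum()
                                      / (alpha + weights[j].sum()))
        chosen = None
        for j in order:
            match = np.minimum(x, weights[j]).sum() / x.sum()
            if match >= rho:               # vigilance test passed
                chosen = j
                break
        if chosen is None:                 # commit a new category
            weights.append(x.copy())
            chosen = len(weights) - 1
        else:                              # learning update (beta=1: fast)
            weights[chosen] = (beta * np.minimum(x, weights[chosen])
                               + (1.0 - beta) * weights[chosen])
        labels.append(chosen)
    return labels, weights
```

With high vigilance the network carves the input space finely; lowering `rho` yields broader categories, which is the knob the thesis's parameter-specification analyses revolve around.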

  9. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864

  10. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.

    PubMed

    Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk

    2017-07-01

    This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement, that is, better long-term memory for emotional than for neutral stimuli, and the emotion-induced recognition bias, that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role in the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account, that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.

  11. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored the ways in which gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. We conclude that these underlying perceptual and memory deficits may be related to students' academic problems.

  12. Signed reward prediction errors drive declarative learning

    PubMed Central

    Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning, a quintessentially human form of learning, remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493

  13. Prestimulus default mode activity influences depth of processing and recognition in an emotional memory task.

    PubMed

    Soravia, Leila M; Witmer, Joëlle S; Schwab, Simon; Nakataki, Masahito; Dierks, Thomas; Wiest, Roland; Henke, Katharina; Federspiel, Andrea; Jann, Kay

    2016-03-01

    Low self-referential thought is associated with better concentration, which leads to deeper encoding and improves learning and subsequent retrieval. There is evidence that being engaged in externally rather than internally focused tasks is related to low neural activity in the default mode network (DMN), promoting an open mind and the deep elaboration of new information. Thus, reduced DMN activity should lead to enhanced concentration, comprehensive stimulus evaluation including emotional categorization, deeper stimulus processing, and better long-term retention over a whole week. In this fMRI study, we investigated brain activation preceding and during incidental encoding of emotional pictures and its influence on subsequent recognition performance. During fMRI, 24 subjects were exposed to 80 pictures of different emotional valence and were subsequently asked to complete an online recognition task one week later. Results indicate that neural activity within the medial temporal lobes during encoding predicts subsequent memory performance. Moreover, low activity of the default mode network preceding incidental encoding led to slightly better recognition performance, independent of the emotional perception of a picture. The findings indicate that the suppression of internally-oriented thoughts leads to a more comprehensive and thorough evaluation of a stimulus and its emotional valence. Reduced activation of the DMN prior to stimulus onset is associated with deeper encoding and enhanced consolidation and retrieval performance even one week later. Even small prestimulus lapses of attention influence consolidation and subsequent recognition performance. © 2015 Wiley Periodicals, Inc.

  14. Goal-seeking neural net for recall and recognition

    NASA Astrophysics Data System (ADS)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes that take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. Synaptic reinforcement creates a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research, a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses, recall is defined as retrieval of stored information where little or no matching is involved. Recognition, on the other hand, is recall with matching; it therefore involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement, in which all signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model, all synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and in pipeline mode from the last layer inward. A model of a complex neuron with varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented, and this network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  15. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    PubMed

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance in Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, and (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. 
The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
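
    The partial-correlation step described above (correlating working memory with speech scores while controlling for demographic variables) can be sketched by residualizing both variables on the controls. The data and variable names below are synthetic illustrations, not the study's data:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing out the control variables."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given controls
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given controls
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.normal(10, 2, 200)                          # hypothetical demographic control
wm = 0.5 * age + rng.normal(size=200)                 # synthetic working-memory score
speech = 0.8 * wm + 0.3 * age + rng.normal(size=200)  # synthetic sentence-recognition score

r_raw = np.corrcoef(wm, speech)[0, 1]
r_partial = partial_corr(wm, speech, age[:, None])
print(r_raw, r_partial)
```

    Here the raw correlation is inflated by the shared dependence on age; partialling the control out isolates the direct working-memory/speech association, which is the logic behind the study's analysis.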

  16. Voice gender discrimination provides a measure of more than pitch-related perception in cochlear implant users.

    PubMed

    Li, Tianhao; Fu, Qian-Jie

    2011-08-01

    (1) To investigate whether voice gender discrimination (VGD) could be a useful indicator of the spectral and temporal processing abilities of individual cochlear implant (CI) users; (2) To examine the relationship between VGD and speech recognition with CI when comparable acoustic cues are used for both perception processes. VGD was measured using two talker sets with different inter-gender fundamental frequencies (F0), as well as different acoustic CI simulations. Vowel and consonant recognition in quiet and noise were also measured and compared with VGD performance. Eleven postlingually deaf CI users participated. The results showed that (1) mean VGD performance differed for different stimulus sets, (2) VGD and speech recognition performance varied among individual CI users, and (3) individual VGD performance was significantly correlated with speech recognition performance under certain conditions. VGD measured with selected stimulus sets might be useful for assessing not only pitch-related perception, but also spectral and temporal processing by individual CI users. In addition to improvements in spectral resolution and modulation detection, the improvement in higher modulation frequency discrimination might be particularly important for CI users in noisy environments.

  17. Evaluation and Effectiveness of Pain Recognition and Management Training for Staff Working in Learning Disability Services

    ERIC Educational Resources Information Center

    Mackey, Ellen; Dodd, Karen

    2011-01-01

    Following Beacroft & Dodd's (2009) audit of pain recognition and management within learning disability services in Surrey, it was recommended that learning disability services should receive training in pain recognition and management. Two hundred and seventy-five services were invited to participate, of which 197 services in Surrey accepted…

  18. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  19. The Significance of the Learner Profile in Recognition of Prior Learning

    ERIC Educational Resources Information Center

    Snyman, Marici; van den Berg, Geesje

    2018-01-01

    Recognition of prior learning (RPL) is based on the principle that valuable learning, worthy of recognition, takes place outside formal education. In the context of higher education, legislation provides an enabling framework for the implementation of RPL. However, RPL will only gain its rightful position if it can ensure the RPL candidates'…

  20. [Perception of emotional intonation of noisy speech signal with different acoustic parameters by adults of different age and gender].

    PubMed

    Dmitrieva, E S; Gel'man, V Ia

    2011-01-01

    Distinctive features of listeners' recognition of different emotional intonations (positive, negative and neutral) of male and female speakers, in the presence or absence of background noise, were studied in 49 adults aged 20-79 years. In all listeners, noise produced the most pronounced decrease in recognition accuracy for the positive emotional intonation ("joy") compared with other intonations, whereas it did not influence the recognition accuracy of "anger" in 65-79-year-old listeners. Higher recognition rates for noisy speech were observed when the emotional intonations were expressed by female speakers. Acoustic characteristics of noisy and clear speech signals underlying the perception of emotional prosody were identified for adult listeners of different ages and genders.

  1. Mirror self-face perception in individuals with schizophrenia: Feelings of strangeness associated with one's own image.

    PubMed

    Bortolon, Catherine; Capdevielle, Delphine; Altman, Rosalie; Macgregor, Alexandra; Attal, Jérôme; Raffard, Stéphane

    2017-07-01

    Self-face recognition is crucial for the sense of identity and for maintaining a coherent sense of self. Most of our daily experience with the image of our own face occurs when we look at ourselves in the mirror. To date, however, mirror self-perception in schizophrenia has received little attention, despite evidence of face recognition deficits and self-disturbances in this disorder. This study therefore aimed to investigate mirror self-face perception in schizophrenia patients and its correlation with clinical symptoms. Twenty-four schizophrenia patients and twenty-five healthy controls were explicitly requested to describe their image in detail for 2 min whilst looking at themselves in a mirror. They were then asked to report whether they experienced any self-face recognition difficulties. Results showed that schizophrenia patients reported more feelings of strangeness towards their face than healthy controls (U=209.5, p=0.048, r=0.28), but no statistically significant differences were found for misidentification (p=0.111) or failures in recognition (p=0.081). Symptoms such as hallucinations, somatic concerns and depression were also associated with self-face perception abnormalities (all p-values<0.05). Feelings of strangeness toward one's own face in schizophrenia might reflect a familiar-face perception deficit or a more global self-disturbance, characterized by a loss of self-other boundaries and associated with abnormal body experiences and first-rank symptoms. Regarding this last hypothesis, multisensory integration might affect the way patients perceive themselves, since it plays an important role in mirror self-perception. Copyright © 2017. Published by Elsevier B.V.

  2. Learning during Processing: Word Learning Doesn't Wait for Word Recognition to Finish

    ERIC Educational Resources Information Center

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed…

  3. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    PubMed

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have been new research directions and hot topics in machine learning and artificial intelligence in recent years. Deep learning has achieved breakthroughs in image and speech recognition, and has also been widely applied to face recognition and information retrieval owing to its particular advantages. Bone X-ray images show variations in black-white-gray gradation, with image features of black-white contrast and gray-level differences. Building on these strengths of deep learning in image recognition, we combine it with research on bone age assessment to provide a basis for constructing an automatic forensic bone age assessment system. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different fields in China and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.
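
    As a loose illustration of why convolutional networks suit the black-white contrast features described above, the sketch below runs one convolution / ReLU / global-average-pooling step over a toy "radiograph". The kernel and image are invented for illustration; this is not the authors' model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation (single channel), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-like kernel responds strongly to sharp black-white transitions,
# the kind of gray-level feature the abstract highlights in bone X-rays.
sobel = np.array([[1., 0., -1.],
                  [2., 0., -2.],
                  [1., 0., -1.]])

img = np.zeros((8, 8))
img[:, :4] = 1.0                             # toy image: bright bone-like region, dark background

feat = np.maximum(conv2d(img, sobel), 0.0)   # ReLU keeps the positive edge responses
pooled = feat.mean()                         # global average pooling -> one scalar feature
print(pooled)
```

    A real bone-age network would stack many learned kernels and feed the pooled features into a regression head for age, but each layer is built from exactly these operations.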

  4. Tone perception in Mandarin-speaking school age children with otitis media with effusion

    PubMed Central

    Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-01-01

    Objectives: The present study explored tone perception ability in school age Mandarin-speaking children with otitis media with effusion (OME) in noisy listening environments. The study investigated the interaction effects of noise, tone type, age, and hearing status on monaural tone perception, and assessed the application of a hierarchical clustering algorithm for profiling hearing impairment in children with OME. Methods: Forty-one children with normal hearing and normal middle ear status and 84 children with OME with or without hearing loss participated in this study. The children with OME were further divided into two subgroups based on their severity and pattern of hearing loss using a hierarchical clustering algorithm. Monaural tone recognition was measured using a picture-identification test format incorporating six sets of monosyllabic words conveying four lexical tones under speech spectrum noise, with signal-to-noise ratio (SNR) conditions ranging from -9 to -21 dB. Results: Linear correlation indicated that tone recognition thresholds of children with OME were significantly correlated with age and with pure tone hearing thresholds at every frequency tested. Children with hearing thresholds less affected by OME performed similarly to their peers with normal hearing. Tone recognition thresholds of children with auditory status more affected by OME were significantly inferior to those of children with normal hearing or with minor hearing loss. Younger children demonstrated poorer tone recognition performance than older children with OME. A mixed-design repeated-measures ANCOVA showed significant main effects of listening condition, hearing status, and tone type on tone recognition. Contrast comparisons revealed that tone recognition scores were significantly better under -12 dB SNR than under -15 dB SNR conditions, and significantly worse under -18 dB SNR than under -15 dB SNR conditions. Tone 1 was the easiest and Tone 3 the most difficult tone to identify for all participants, when considering -12, -15, and -18 dB SNR as within-subject variables. The interaction effect between hearing status and tone type indicated that children with greater levels of OME-related hearing loss had more impaired perception of Tone 1 and Tone 2 compared with their peers with lesser levels of OME-related hearing loss. However, perception of Tone 3 and Tone 4 remained similar among all three groups. Tone 2 and Tone 3 were the most perceptually difficult tones for children with or without OME-related hearing loss in all listening conditions. Conclusions: The hierarchical clustering algorithm demonstrated usefulness in risk stratification for tone perception deficiency in children with OME-related hearing loss. There was marked impairment in tone perception in noise for children with greater levels of OME-related hearing loss. Monaural lexical tone perception in younger children was more vulnerable to noise and OME-related hearing loss than that in older children. PMID:28829840
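
    The subgrouping step above, which clusters children by the severity and pattern of their hearing loss, can be sketched with a naive average-linkage agglomerative routine. The threshold values are hypothetical, and the linkage choice is an assumption, since the abstract does not specify one:

```python
import numpy as np

def agglomerate(X, n_clusters):
    """Naive average-linkage hierarchical clustering: repeatedly merge the
    two clusters with the smallest mean pairwise distance."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge cluster b into cluster a
    return clusters

# Hypothetical pure-tone thresholds (dB HL at 0.5, 1, 2 and 4 kHz): four
# children with near-normal hearing, three with mild OME-related loss.
thresholds = np.array([
    [10, 10, 15, 10], [15, 10, 10, 15], [10, 15, 10, 10], [15, 15, 15, 10],
    [35, 40, 35, 30], [40, 35, 40, 35], [35, 35, 30, 40],
])
groups = agglomerate(thresholds, 2)
print(sorted(sorted(g) for g in groups))  # the two hearing profiles separate cleanly
```

    Cutting the merge sequence at a chosen number of clusters is what turns the hierarchy into the study's discrete severity subgroups.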

  6. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  7. Speech-perception training for older adults with hearing loss impacts word recognition and effort.

    PubMed

    Kuchinsky, Stefanie E; Ahlstrom, Jayne B; Cute, Stephanie L; Humes, Larry E; Dubno, Judy R; Eckert, Mark A

    2014-10-01

    The current pupillometry study examined the impact of speech-perception training on word recognition and cognitive effort in older adults with hearing loss. Trainees identified more words at the follow-up than at the baseline session. Training also resulted in an overall larger and faster peaking pupillary response, even when controlling for performance and reaction time. Perceptual and cognitive capacities affected the peak amplitude of the pupil response across participants but did not diminish the impact of training on the other pupil metrics. Thus, we demonstrated that pupillometry can be used to characterize training-related and individual differences in effort during a challenging listening task. Importantly, the results indicate that speech-perception training not only affects overall word recognition, but also a physiological metric of cognitive effort, which has the potential to be a biomarker of hearing loss intervention outcome. Copyright © 2014 Society for Psychophysiological Research.

  8. 3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish

    PubMed Central

    von der Emde, Gerhard; Behr, Katharina; Bouton, Béatrice; Engelmann, Jacob; Fetz, Steffen; Folde, Caroline

    2010-01-01

    Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation. PMID:20577635

  9. Recognition-by-Components: A Theory of Human Image Understanding.

    ERIC Educational Resources Information Center

    Biederman, Irving

    1987-01-01

    The theory proposed (recognition-by-components) hypothesizes the perceptual recognition of objects to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components. Experiments on the perception of briefly presented pictures support the theory. (Author/LMO)

  10. Rehabilitation of face-processing skills in an adolescent with prosopagnosia: Evaluation of an online perceptual training programme.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Mole, Joseph A; Ainge, James A; Gregory, Nicola J; Bobak, Anna K; Bussunt, Amanda

    2015-01-01

    In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills.

  11. Determinants of naming latencies, object comprehension times, and new norms for the Russian standardized set of the colorized version of the Snodgrass and Vanderwart pictures.

    PubMed

    Bonin, Patrick; Guillemard-Tsaparina, Diana; Méot, Alain

    2013-09-01

    We report object-naming and object recognition times collected from Russian native speakers for the colorized version of the Snodgrass and Vanderwart (Journal of Experimental Psychology: Human Learning and Memory 6:174-215, 1980) pictures (Rossion & Pourtois, Perception 33:217-236, 2004). New norms for image variability, body-object interaction [BOI], and subjective frequency collected in Russian, as well as new name agreement scores for the colorized pictures in French, are also reported. In both object-naming and object comprehension times, the name agreement, image agreement, and age-of-acquisition variables made significant independent contributions. Objective word frequency was reliable in object-naming latencies only. The variables of image variability, BOI, and subjective frequency were not significant in either object naming or object comprehension. Finally, imageability was reliable in both tasks. The new norms and object-naming and object recognition times are provided as supplemental materials.

  12. Auditory emotion recognition impairments in Schizophrenia: Relationship to acoustic features and cognition

    PubMed Central

    Gold, Rinat; Butler, Pamela; Revheim, Nadine; Leitman, David; Hansen, John A.; Gur, Ruben; Kantrowitz, Joshua T.; Laukka, Petri; Juslin, Patrik N.; Silipo, Gail S.; Javitt, Daniel C.

    2013-01-01

    Objective: Schizophrenia is associated with deficits in the ability to perceive emotion based upon tone of voice. The basis for this deficit, however, remains unclear, and assessment batteries remain limited. We evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive function. Methods: We studied a primary sample of 92 patients relative to 73 controls. Stimuli were characterized according to both intended emotion and the physical features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched controls, and 188 general comparison subjects. Results: Patients showed significant, large effect size deficits in voice emotion recognition (F=25.4, p<.00001, d=1.1), and were preferentially impaired in recognition of emotion based upon pitch, but not intensity, features (group X feature interaction: F=7.79, p=.006). Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=.56, p<.0001) and within (r=.47, p<.0001) groups. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions: The present study demonstrates impairments in auditory emotion recognition in schizophrenia relative to acoustic features of the underlying stimuli. Furthermore, it provides tools for, and highlights the need for greater attention to, the physical features of stimuli used in the study of social cognition in neuropsychiatric disorders. PMID:22362394

  13. How and what do autistic children see? Emotional, perceptive and social peculiarities reflected in more recent examinations of the visual perception and the process of observation in autistic children.

    PubMed

    Dalferth, M

    1989-01-01

    Autistic symptoms become apparent at the earliest during the second to third month of life, when spontaneous registration of the meaning of specific visual stimuli (eyes, the configuration of the mother's face) does not occur, and when learning experiences from facial expressions and gestures repeatedly shown by the interaction partner can neither evoke a social smile nor stimulate anticipatory behaviour. Even with increasing age, empathetic perception of feelings from the corresponding facial and gestural expression remains very difficult, and autistic children themselves are only insufficiently able to express their own feelings intelligibly to others. Since facial expressions and gestures are perceived visually, the autistic child's perceptive competence is of great importance. On the basis of examinations of visual perception (retinal pathology, tunnel vision), perceptual processing (recognition of feelings, sex and age) and the disintegration of multimodal stimuli, it can be presumed that social and emotional deficits are connected with a deviant perceptual interpretation of the world and irregular processing arising from a neurobiological handicap (the absence of a genetically determined reference system for emotionally significant stimuli), which can have various causes (cf. Gillberg 1988) and which also impedes the adequate expression of feelings in facial expression, gesture and voice. Autistic people see, experience and understand the world in a specific way, in which and by which they differ from non-handicapped people. (ABSTRACT TRUNCATED AT 250 WORDS)

  14. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
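
    The color-histogram half of the detector described above can be sketched as histogram intersection over a joint RGB histogram: a candidate region matches when its color distribution overlaps the learned template's. This is a generic reconstruction, not iRobot's implementation; the bin count and images are invented:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized joint RGB histogram of an image (H, W, 3) with values in [0, 1]."""
    idx = np.clip((img * bins).astype(int), 0, bins - 1)
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(1)
red_object = np.stack([np.full((16, 16), 0.9),     # mostly-red training object
                       np.full((16, 16), 0.1),
                       np.full((16, 16), 0.1)], axis=-1)
noisy_red = np.clip(red_object + rng.normal(0, 0.02, red_object.shape), 0, 1)
blue_object = red_object[..., ::-1]                # same shape, different color

h_template = color_histogram(red_object)
print(histogram_intersection(h_template, color_histogram(noisy_red)))    # high
print(histogram_intersection(h_template, color_histogram(blue_object)))  # low
```

    Because the histogram discards spatial layout, this cue is robust to pose changes but blind to color-similar distractors, which is one reason to pair it with a shape-based filter such as TLD, as the system above did.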

  15. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia compared to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosics (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Rapid effects of dorsal hippocampal G-protein coupled estrogen receptor on learning in female mice.

    PubMed

    Lymer, Jennifer; Robinson, Alana; Winters, Boyer D; Choleris, Elena

    2017-03-01

    Through rapid mechanisms of action, estrogens affect learning and memory processes. It has been shown that 17β-estradiol and an Estrogen Receptor (ER) α agonist enhance performance in social recognition, object recognition, and object placement tasks when administered systemically or infused into the dorsal hippocampus. In contrast, systemic and dorsal hippocampal ERβ activation only promote spatial learning. In addition, 17β-estradiol and agonists of ERα and the G-protein coupled estrogen receptor (GPER) increase dendritic spine density in the CA1 hippocampus. Recently, we have shown that selective systemic activation of the GPER also rapidly facilitated social recognition, object recognition, and object placement learning in female mice. Whether activation of the GPER specifically in the dorsal hippocampus can also rapidly improve learning and memory prior to acquisition was unknown. Here, we investigated the rapid effects of infusion of the GPER agonist G-1 (doses: 50 nM, 100 nM, 200 nM) into the dorsal hippocampus on social recognition, object recognition, and object placement learning tasks in the home cage. These paradigms were completed within 40 min, which is within the range of rapid estrogenic effects. Dorsal hippocampal administration of G-1 improved social (doses: 50 nM, 200 nM G-1) and object (dose: 200 nM G-1) recognition, with no effect on object placement. Additionally, when spatial cues were minimized by testing in a Y-apparatus, G-1 administration promoted social (doses: 100 nM, 200 nM G-1) and object (doses: 50 nM, 100 nM, 200 nM G-1) recognition. Therefore, like ERα, the GPER in the hippocampus appears to be sufficient for the rapid facilitation of social and object recognition in female mice, but not for the rapid facilitation of object placement learning. Thus, the GPER in the dorsal hippocampus is involved in estrogenic mediation of learning and memory, and these effects likely occur through rapid signalling mechanisms. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Geometry of the perceptual space

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John

    1999-09-01

    The concept of space and geometry varies across subjects. Following Poincaré, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual-geometric properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer has a subjective factor that can be analytically quantified only through statistical reasoning and in the course of repeated experiments. Thus, the desire to experimentally verify statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space, taking the perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present a model of the perceptual geometry of surfaces through an extension of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the observer's knowledge and Hebbian learning. We use a parallel distributed connectionist paradigm for computational modeling and experimental verification of our theory.

  18. Transfer Learning with Convolutional Neural Networks for SAR Ship Recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Liu, Jia; Heng, Wang; Ren, Kaijun; Song, Junqiang

    2018-03-01

    Ship recognition is the backbone of marine surveillance systems. Recent deep learning methods, e.g. Convolutional Neural Networks (CNNs), have shown high performance on optical images. Training CNNs, however, requires a large number of annotated samples to estimate the numerous model parameters, which has prevented their application to Synthetic Aperture Radar (SAR) images, where annotated training samples are limited. Transfer learning is a promising technique for applications with limited data. To this end, a novel SAR ship recognition method based on CNNs with transfer learning has been developed. In this work, we first start with a CNN model that has been pre-trained on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database. Next, building on the knowledge gained from this image recognition task, we fine-tune the CNN on a new task: recognizing three types of ships in the OpenSARShip database. The experimental results show that our proposed approach clearly increases the recognition rate compared with merely applying CNNs. In addition, compared to existing methods, the proposed method proves to be very competitive and can learn discriminative features directly from training data instead of requiring manual pre-specification or pre-selection.
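    The fine-tune-only-the-head recipe described in this record can be sketched in miniature. The following is a hedged illustration in plain NumPy, not the paper's actual CNN: a frozen random projection stands in for the MSTAR-pre-trained convolutional backbone, and only a new softmax head is trained on a three-class target task; all data and names here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Frozen stand-in for the MSTAR-pre-trained convolutional backbone:
    # a fixed random projection with a ReLU (hypothetical placeholder).
    W_backbone = rng.normal(size=(64, 16))

    def extract_features(images):
        """Frozen feature extractor: (n, 64) image vectors -> (n, 16) features."""
        return np.maximum(images @ W_backbone, 0.0)

    def fine_tune_head(feats, labels, n_classes=3, lr=0.1, epochs=200):
        """Train only a new softmax head on the target task (standing in
        for the three OpenSARShip ship types); the backbone stays frozen."""
        W = np.zeros((feats.shape[1], n_classes))
        onehot = np.eye(n_classes)[labels]
        for _ in range(epochs):
            logits = feats @ W
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            W -= lr * feats.T @ (p - onehot) / len(labels)  # cross-entropy gradient
        return W

    # Toy "SAR chips": three classes as noisy copies of class prototypes.
    labels = rng.integers(0, 3, size=150)
    prototypes = rng.normal(size=(3, 64))
    images = prototypes[labels] + 0.3 * rng.normal(size=(150, 64))

    feats = extract_features(images)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    W_head = fine_tune_head(feats, labels)
    accuracy = np.mean((feats @ W_head).argmax(axis=1) == labels)
    ```

    The design point is the same as in the paper: the expensive, data-hungry feature extractor is reused, and only the small task-specific classifier is estimated from the limited target data.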

  19. Relative contributions of acoustic temporal fine structure and envelope cues for lexical tone perception in noise

    PubMed Central

    Qi, Beier; Mao, Yitao; Liu, Jiaxing; Liu, Bo; Xu, Li

    2017-01-01

    Previous studies have shown that lexical tone perception in quiet relies on the acoustic temporal fine structure (TFS) but not on the envelope (E) cues. The contributions of TFS to speech recognition in noise are under debate. In the present study, Mandarin tone tokens were mixed with speech-shaped noise (SSN) or two-talker babble (TTB) at five signal-to-noise ratios (SNRs; −18 to +6 dB). The TFS and E were then extracted from each of the 30 bands using the Hilbert transform. Twenty-five combinations of TFS and E from the sound mixtures of the same tone tokens at various SNRs were created. Twenty normal-hearing, native-Mandarin-speaking listeners participated in the tone-recognition test. Results showed that tone-recognition performance improved as the SNR of either the TFS or the E increased. The masking effects on tone perception were weaker for the TTB than for the SSN. For both types of masker, the perceptual weights of TFS and E in tone perception in noise were nearly equivalent, with E playing a slightly greater role than TFS. Thus, the relative contributions of TFS and E cues to lexical tone perception in noise or in competing-talker maskers differ from those in quiet and from their contributions to speech perception of non-tonal languages. PMID:28599529
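    The per-band envelope/TFS decomposition used in such studies can be illustrated directly. The sketch below computes the analytic signal with an FFT (the standard construction behind scipy.signal.hilbert) and splits an amplitude-modulated tone into its envelope (E) and temporal fine structure (TFS); the signal parameters are illustrative, not the study's stimuli.

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via FFT (the construction used by
        scipy.signal.hilbert): zero the negative frequencies and
        double the positive ones."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.fft.ifft(X * h)

    def envelope_and_tfs(x):
        """Split one band into Hilbert envelope (E) and temporal fine
        structure (TFS); E * TFS reconstructs the band signal."""
        z = analytic_signal(x)
        env = np.abs(z)            # slow amplitude contour (E)
        tfs = np.cos(np.angle(z))  # unit-amplitude carrier (TFS)
        return env, tfs

    fs = 8000
    t = np.arange(fs) / fs
    carrier = np.cos(2 * np.pi * 500 * t)              # fine structure
    modulator = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * t)  # envelope
    env, tfs = envelope_and_tfs(modulator * carrier)
    ```

    For this exactly periodic test tone, env recovers the modulator and env * tfs reconstructs the original band; in a vocoder, the E of one sound mixture can then be recombined with the TFS of another.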

  20. Learned Non-Rigid Object Motion is a View-Invariant Cue to Recognizing Novel Objects

    PubMed Central

    Chuang, Lewis L.; Vuong, Quoc C.; Bülthoff, Heinrich H.

    2012-01-01

    There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint. PMID:22661939

  1. Improved perception of music with a harmonic based algorithm for cochlear implants.

    PubMed

    Li, Xing; Nie, Kaibao; Imennov, Nikita S; Rubinstein, Jay T; Atlas, Les E

    2013-07-01

    The lack of fine structure information in conventional cochlear implant (CI) encoding strategies presumably contributes to the generally poor music perception with CIs. To improve CI users' music perception, a harmonic-single-sideband-encoder (HSSE) strategy was developed, which explicitly tracks the harmonics of a single musical source and transforms them into modulators conveying both amplitude and temporal fine structure cues to the electrodes. To investigate its effectiveness, vocoder simulations of HSSE and the conventional continuous-interleaved-sampling (CIS) strategy were implemented. Using these vocoders, five normal-hearing subjects' melody and timbre recognition performance was evaluated: a significant benefit of HSSE for both melody (p < 0.002) and timbre (p < 0.026) recognition was found. Additionally, HSSE was acutely tested in eight CI subjects. On timbre recognition, a significant advantage of HSSE over the subjects' clinical strategy was demonstrated: the largest improvement was 35% and the mean improvement 17% (p < 0.013). On melody recognition, two subjects showed 20% improvement with HSSE; however, the mean improvement of 7% across subjects was not significant (p > 0.090). To quantify the temporal cues delivered to the auditory nerve, the neural spike patterns evoked by HSSE and CIS for one melody stimulus were simulated using an auditory nerve model. Quantitative analysis demonstrated that HSSE conveys temporal pitch cues better than CIS. The results suggest that HSSE is a promising strategy for enhancing music perception with CIs.

  2. The Memory Fitness Program: Cognitive Effects of a Healthy Aging Intervention

    PubMed Central

    Miller, Karen J.; Siddarth, Prabha; Gaines, Jean M.; Parrish, John M.; Ercoli, Linda M.; Marx, Katherine; Ronch, Judah; Pilgram, Barbara; Burke, Kasey; Barczak, Nancy; Babcock, Bridget; Small, Gary W.

    2014-01-01

    Context Age-related memory decline affects a large proportion of older adults. Cognitive training, physical exercise, and other lifestyle habits may help to minimize self-perception of memory loss and a decline in objective memory performance. Objective The purpose of this study was to determine whether a 6-week educational program on memory training, physical activity, stress reduction, and healthy diet led to improved memory performance in older adults. Design A convenience sample of 115 participants (mean age: 80.9 [SD: 6.0 years]) was recruited from two continuing care retirement communities. The intervention consisted of 60-minute classes held twice weekly with 15–20 participants per class. Testing of both objective and subjective cognitive performance occurred at baseline, preintervention, and postintervention. Objective cognitive measures evaluated changes in five domains: immediate verbal memory, delayed verbal memory, retention of verbal information, memory recognition, and verbal fluency. A standardized metamemory instrument assessed four domains of memory self-awareness: frequency and severity of forgetting, retrospective functioning, and mnemonics use. Results The intervention program resulted in significant improvements on objective measures of memory, including recognition of word pairs (t[114] = 3.62, p < 0.001) and retention of verbal information from list learning (t[114] = 2.98, p < 0.01). No improvement was found for verbal fluency. Regarding subjective memory measures, the retrospective functioning score increased significantly following the intervention (t[114] = 4.54, p < 0.0001), indicating perception of a better memory. Conclusions These findings indicate that a 6-week healthy lifestyle program can improve both encoding and recalling of new verbal information, as well as self-perception of memory ability in older adults residing in continuing care retirement communities. PMID:21765343

  3. Intraspecific Variation in Learning: Worker Wasps Are Less Able to Learn and Remember Individual Conspecific Faces than Queen Wasps.

    PubMed

    Tibbetts, Elizabeth A; Injaian, Allison; Sheehan, Michael J; Desjardins, Nicole

    2018-05-01

    Research on individual recognition often focuses on species-typical recognition abilities rather than assessing intraspecific variation in recognition. As individual recognition is cognitively costly, the capacity for recognition may vary within species. We test how individual face recognition differs between nest-founding queens (foundresses) and workers in Polistes fuscatus paper wasps. Individual recognition mediates dominance interactions among foundresses. Three previously published experiments have shown that foundresses (1) benefit by advertising their identity with distinctive facial patterns that facilitate recognition, (2) have robust memories of individuals, and (3) rapidly learn to distinguish between face images. Like foundresses, workers have variable facial patterns and are capable of individual recognition. However, worker dominance interactions are muted. Therefore, individual recognition may be less important for workers than for foundresses. We find that (1) workers with unique faces receive amounts of aggression similar to those of workers with common faces, indicating that wasps do not benefit from advertising their individual identity with a unique appearance; (2) workers lack robust memories for individuals, as they cannot remember unique conspecifics after a 6-day separation; and (3) workers learn to distinguish between facial images more slowly than foundresses during training. The recognition differences between foundresses and workers are notable because Polistes lack discrete castes; foundresses and workers are morphologically similar, and workers can take over as queens. Overall, social benefits and receiver capacity for individual recognition are surprisingly plastic.

  4. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    PubMed

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

    Deep learning algorithms are widely used for pattern recognition applications such as text, object, and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents a SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Unlike conventional works that have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the most popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated in 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power and 213.1 mW peak power at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state of the art.
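    As a quick sanity check, the quoted energy efficiency follows directly from the peak figures (peak performance divided by peak power):

    ```python
    peak_gops = 411.3        # reported peak performance, GOPS
    peak_power_mw = 213.1    # reported peak power at 200 MHz, 1.2 V

    # GOPS per mW equals TOPS per W (both unit prefixes shift by 10^3),
    # so the ratio can be taken directly.
    efficiency_tops_per_w = peak_gops / peak_power_mw  # ≈ 1.93 TOPS/W
    ```

    Note the figure uses peak power, not the 185 mW average; dividing by average power would give a higher number.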

  5. From Birdsong to Human Speech Recognition: Bayesian Inference on a Hierarchy of Nonlinear Dynamical Systems

    PubMed Central

    Yildiz, Izzet B.; von Kriegstein, Katharina; Kiebel, Stefan J.

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. PMID:24068902

  6. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems.

    PubMed

    Yildiz, Izzet B; von Kriegstein, Katharina; Kiebel, Stefan J

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

  7. Rapid response learning of brand logo priming: Evidence that brand priming is not dominated by rapid response learning.

    PubMed

    Boehm, Stephan G; Smith, Ciaran; Muench, Niklas; Noble, Kirsty; Atherton, Catherine

    2017-08-31

    Repetition priming increases the accuracy and speed of responses to repeatedly processed stimuli. Repetition priming can result from two complementary sources: rapid response learning and facilitation within perceptual and conceptual networks. In conceptual classification tasks, rapid response learning dominates priming of object recognition, but it does not dominate priming of person recognition. This suggests that the relative engagement of network facilitation and rapid response learning depends on the stimulus domain. Here, we addressed the importance of the stimulus domain for rapid response learning by investigating priming in another domain, brands. In three experiments, participants performed conceptual decisions for brand logos. Strong priming was present, but it was not dominated by rapid response learning. These findings add further support to the importance of the stimulus domain for the relative importance of network facilitation and rapid response learning, and they indicate that brand priming is more similar to person recognition priming than object recognition priming, perhaps because priming of both brands and persons requires individuation.

  8. Effects of Problem-Based Learning on Recognition Learning and Transfer Accounting for GPA and Goal Orientation

    ERIC Educational Resources Information Center

    Bergstrom, Cassendra M.; Pugh, Kevin J.; Phillips, Michael M.; Machlev, Moshe

    2016-01-01

    Conflicting research results have stirred controversy over the effectiveness of problem-based learning (PBL) compared to direct instruction at fostering content learning, particularly for novices. We addressed this by investigating effectiveness with respect to recognition learning and transfer and conducting an aptitude-treatment interaction…

  9. Use of Handwriting Recognition Technologies in Tablet-Based Learning Modules for First Grade Education

    ERIC Educational Resources Information Center

    Yanikoglu, Berrin; Gogus, Aytac; Inal, Emre

    2017-01-01

    Learning through modules on a tablet helps students participate effectively in learning activities in classrooms and provides flexibility in the learning process. This study presents the design and evaluation of an application that is based on handwriting recognition technologies and e-content for the developed learning modules. The application…

  10. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts made up of a limited number of phosphenes, elicited by an electrode array, with uncontrollable color and restricted grayscale. With this level of visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies to optimize the visual perception of the recipients. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. Building on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further used to enhance the extracted foreground objects. i) Psychophysical experiments verified that under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
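    The pixelization side of such strategies is easy to sketch. Assuming a foreground mask has already been obtained (the saliency/grabCut segmentation itself is out of scope here), the toy code below contrasts Direct Pixelization with a simplified background-suppression stand-in, in the spirit of, but not identical to, the paper's Background Pixel Shrink; function names and grid size are illustrative.

    ```python
    import numpy as np

    def pixelize(img, grid=(32, 32)):
        """Direct Pixelization: block-average a grayscale image onto a
        low-resolution phosphene grid (simulated prosthetic percept)."""
        gh, gw = grid
        h, w = img.shape  # assumed divisible by the grid dimensions
        return img.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

    def foreground_pixelize(img, mask, grid=(32, 32)):
        """Simplified background suppression: zero background pixels
        before pixelizing, so the limited phosphenes are spent on the
        segmented object instead of clutter."""
        return pixelize(np.where(mask, img, 0.0), grid)

    # Toy scene: bright object on cluttered background, plus its mask.
    rng = np.random.default_rng(1)
    img = 0.4 * rng.random((128, 128))
    mask = np.zeros((128, 128), dtype=bool)
    mask[40:90, 40:90] = True
    img[mask] = 1.0

    direct = pixelize(img)
    enhanced = foreground_pixelize(img, mask)
    ```

    In the enhanced percept the background blocks go dark while object blocks keep full contrast, which is the intuition behind the reported recognition gains.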

  11. Colour influences perception of facial emotions but this effect is impaired in healthy ageing and schizophrenia.

    PubMed

    Silver, Henry; Bilker, Warren B

    2015-01-01

    Social cognition is commonly assessed by identification of emotions in facial expressions. Presence of colour, a salient feature of stimuli, might influence emotional face perception. We administered 2 tests of facial emotion recognition, the Emotion Recognition Test (ER40) using colour pictures and the Penn Emotional Acuity Test using monochromatic pictures, to 37 young healthy, 39 old healthy and 37 schizophrenic men. Among young healthy individuals recognition of emotions was more accurate and faster in colour than in monochromatic pictures. Compared to the younger group, older healthy individuals revealed impairment in identification of sad expressions in colour but not monochromatic pictures. Schizophrenia patients showed greater impairment in colour than monochromatic pictures of neutral and sad expressions and overall total score compared to both healthy groups. Patients showed significant correlations between cognitive impairment and perception of emotion in colour but not monochromatic pictures. Colour enhances perception of general emotional clues and this contextual effect is impaired in healthy ageing and schizophrenia. The effects of colour need to be considered in interpreting and comparing studies of emotion perception. Coloured face stimuli may be more sensitive to emotion processing impairments but less selective for emotion-specific information than monochromatic stimuli. This may impact on their utility in early detection of impairments and investigations of underlying mechanisms.

  12. Recognition of strong earthquake-prone areas with a single learning class

    NASA Astrophysics Data System (ADS)

    Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.

    2017-05-01

    This article presents a new recognition-with-learning algorithm, Barrier, designed for the recognition of earthquake-prone areas. In contrast to the Crust (Kora) algorithm used by the classical EPA approach, the Barrier algorithm learns from just one "pure" high-seismicity class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is applied to the recognition of areas prone to earthquakes with M ≥ 6.0 in the Caucasus region. A comparative analysis of the Crust and Barrier algorithms confirms that their results are productively consistent.

  13. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  14. Recognition of Prior Learning, Self-Realisation and Identity within Axel Honneth's Theory of Recognition

    ERIC Educational Resources Information Center

    Sandberg, Fredrik; Kubiak, Chris

    2013-01-01

    This paper argues for the significance of Axel Honneth's theory of recognition for understanding recognition of prior learning (RPL). Case studies of the experiences of RPL by paraprofessional workers in health and social care in the UK and Sweden are used to explicate this significance. The results maintain that there are varying conditions of…

  15. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  16. M-Learning: A Psychometric Study of the Mobile Learning Perception Scale and the Relationships between Teachers' Perceptions and School Level

    ERIC Educational Resources Information Center

    Roche, Allyn J.

    2013-01-01

    The purpose of this research was to evaluate the psychometric properties of the Mobile Learning Perception Scale (MLPS) of Uzunboylu and Ozdamli (2011) in order to determine whether it is an acceptable instrument for measuring U.S. teachers' perceptions of mobile learning (m-learning) in the classroom. A second purpose was to determine if relationships…

  17. Caricature generalization benefits for faces learned with enhanced idiosyncratic shape or texture.

    PubMed

    Itz, Marlena L; Schweinberger, Stefan R; Kaufmann, Jürgen M

    2017-02-01

    Recent findings show benefits for learning and subsequent recognition of faces caricatured in shape or texture, but there is little evidence on whether this caricature learning advantage generalizes to recognition of the veridical counterparts at test. Moreover, it has been reported that familiar face recognition draws relatively more on texture information, at the expense of shape information, than unfamiliar face recognition does. The aim of this study was to examine whether veridical faces are recognized better when they were learned as caricatures than when they were learned as veridicals: what we call a caricature generalization benefit. Photorealistic facial stimuli derived from a 3-D camera system were caricatured selectively in either shape or texture by 50%. Faces were learned across different images either as veridicals, shape caricatures, or texture caricatures. At test, all learned and novel faces were presented as previously unseen frontal veridicals, and participants performed an old/new task. We assessed accuracies, reaction times, and face-sensitive event-related potentials (ERPs). Faces learned as caricatures were recognized more accurately than faces learned as veridicals. At learning, the N250 and LPC were largest for shape caricatures, suggesting encoding advantages for distinctive facial shape. At test, the LPC was largest for faces that had been learned as texture caricatures, indicating the importance of texture for familiar face recognition. Overall, our findings demonstrate that caricature learning advantages can generalize to and, importantly, improve recognition of veridical versions of faces.

  18. Multi-modal imaging predicts memory performance in normal aging and cognitive decline.

    PubMed

    Walhovd, K B; Fjell, A M; Dale, A M; McEvoy, L K; Brewer, J; Karow, D S; Salmon, D P; Fennema-Notestine, C

    2010-07-01

    This study (n=161) related morphometric MR imaging, FDG-PET and APOE genotype to memory scores in normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD). Stepwise regression analyses focused on morphometric and metabolic characteristics of the episodic memory network: hippocampus, entorhinal, parahippocampal, retrosplenial, posterior cingulate, precuneus, inferior parietal, and lateral orbitofrontal cortices. In NC, hippocampal metabolism predicted learning; entorhinal metabolism predicted recognition; and hippocampal metabolism predicted recall. In MCI, thickness of the entorhinal and precuneus cortices predicted learning, while parahippocampal metabolism predicted recognition. In AD, posterior cingulate cortical thickness predicted learning, while APOE genotype predicted recognition. In the total sample, hippocampal volume and metabolism, cortical thickness of the precuneus, and inferior parietal metabolism predicted learning; hippocampal volume and metabolism, parahippocampal thickness and APOE genotype predicted recognition. Imaging methods appear complementary and differentially sensitive to memory in health and disease. Medial temporal and parietal metabolism and morphometry best explained memory variance. Medial temporal characteristics were related to learning, recall and recognition, while parietal structures only predicted learning. Copyright 2008. Published by Elsevier Inc.

  19. The Sound of Social Cognition: Toddlers' Understanding of How Sound Influences Others

    ERIC Educational Resources Information Center

    Williamson, Rebecca A.; Brooks, Rechele; Meltzoff, Andrew N.

    2015-01-01

    Understanding others' perceptions is a fundamental aspect of social cognition. Children's construal of visual perception is well investigated, but there is little work on children's understanding of others' auditory perception. The current study assesses toddlers' recognition that producing different sounds can affect others…

  20. The Development of an Adolescent Perception of Being Known Measure

    ERIC Educational Resources Information Center

    Wallace, Tanner LeBaron; Ye, Feifei; McHugh, Rebecca; Chhuon, Vichet

    2012-01-01

    Adopting a constructivist perspective of adolescent development, we argue adolescents' perceptions of "being known" reflect teachers' authentic recognition of adolescents' multiple emerging identities. As such, adolescent perceptions of being known are a distinct factor associated with high school students' engagement in school. A mixed…

  1. The Interplay of Perceptions of the Learning Environment, Personality and Learning Strategies: A Study amongst International Business Studies Students

    ERIC Educational Resources Information Center

    Nijhuis, Jan; Segers, Mien; Gijselaers, Wim

    2007-01-01

    Previous research on students' learning strategies has examined the relationships between either perceptions of the learning environment or personality and learning strategies. The focus of this study was on the joint relationships between the students' perceptions of the learning environment, their personality, and the learning strategies they…

  2. Time-Warp–Invariant Neuronal Processing

    PubMed Central

    Gütig, Robert; Sompolinsky, Haim

    2009-01-01

    Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities. PMID:19582146
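    The shunting mechanism described above can be illustrated with a toy calculation. The sketch below is not the paper's spike-based model (the learning rule and network are not given in the abstract); it only shows, under assumed parameter values, why a conductance-based membrane rescales its effective integration time: the effective time constant is the capacitance divided by the total conductance, so when the mean synaptic conductance tracks the input event rate, compressed stimuli are integrated faster and dilated ones more slowly. The function name `effective_tau` and all numbers are illustrative.

```python
import numpy as np

def effective_tau(g_leak, g_syn, capacitance=1.0):
    """Effective membrane time constant under synaptic shunting:
    tau_eff = C / (g_leak + g_syn)."""
    return capacitance / (g_leak + g_syn)

# A 2x-compressed stimulus roughly doubles the presynaptic event rate, and
# hence the mean shunting conductance; a 2x dilation roughly halves it.
g_leak, base_g_syn = 0.05, 1.0
taus = {}
for warp in (0.5, 1.0, 2.0):      # compression, original, dilation
    g_syn = base_g_syn / warp     # mean conductance ~ event rate ~ 1 / warp
    taus[warp] = effective_tau(g_leak, g_syn)

# When g_syn dominates g_leak, tau_eff scales almost linearly with the warp
# factor, so integration automatically stretches or shrinks with the stimulus.
print(taus)
```

With a small leak conductance the three time constants come out in nearly the 0.5 : 1 : 2 ratio of the warps, which is the invariance property the abstract attributes to synaptic shunting.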

  3. Infrared vehicle recognition using unsupervised feature learning based on K-feature

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Tan, Yihua; Xia, Haijiao; Tian, Jinwen

    2018-02-01

    Given the complex battlefield environments in which vehicle recognition algorithms are deployed, it is difficult to establish a complete knowledge base in practice, and infrared vehicle recognition, which plays an important role in remote sensing, remains difficult and challenging. In this paper we propose a new unsupervised feature learning method based on K-feature to recognize vehicles in infrared images. First, a saliency-based target detection algorithm is applied to the initial image. Then, unsupervised feature learning based on K-feature, in which a visual dictionary is learned by K-means clustering from a large number of unlabeled samples, is used to suppress false alarms and improve accuracy. Finally, the vehicle recognition result is obtained after post-processing. Extensive experiments demonstrate that the proposed method achieves satisfactory recognition accuracy and robustness for vehicle recognition in infrared images under complex backgrounds, and also improves reliability.
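    The abstract does not spell out the K-feature pipeline, but learning a visual dictionary with K-means over unlabeled patches and then encoding new patches against the centroids is a standard construction, sketched below with numpy on random stand-in data. The function names (`learn_dictionary`, `encode`), the "triangle" soft-assignment encoding, and all sizes are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(patches):
    """Zero-mean, unit-norm each patch (row)."""
    X = patches - patches.mean(axis=1, keepdims=True)
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)

def learn_dictionary(patches, k=16, iters=10):
    """Plain K-means on normalized patches; centroids form the visual dictionary."""
    X = normalize(patches)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared distance from every patch to every centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids

def encode(patches, centroids):
    """'Triangle' soft assignment: activation = max(0, mean distance - distance)."""
    X = normalize(patches)
    d = np.sqrt(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(1, keepdims=True) - d)

patches = rng.normal(size=(200, 64))   # stand-in for 8x8 infrared image patches
D = learn_dictionary(patches, k=16)
features = encode(patches, D)          # one nonnegative 16-dim code per patch
```

The resulting feature matrix would then feed whatever classifier or false-alarm filter the recognition stage uses.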

  4. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a central role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly specialized for facial processing remains controversial; some researchers argue that the FFA is related to 'becoming an expert' with certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia bear directly on this issue. Second, the amygdala appears closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are also related to gaze recognition, which explains why a patient with bilateral amygdala damage selectively failed to recognize fearful expressions: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may underlie the covert recognition observed in prosopagnosic patients.

  5. Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.

    PubMed

    Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David

    2016-03-21

    Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol, designed to induce activity in V1 without modulation from visual awareness, to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Mutual interference between statistical summary perception and statistical learning.

    PubMed

    Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B

    2011-09-01

    The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.

  7. An algorithm of improving speech emotional perception for hearing aid

    NASA Astrophysics Data System (ADS)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple-kernel techniques to overcome a drawback of the SVM: slow training. First, to improve the adaptive performance of the Gaussian radial basis function (RBF) kernel, the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. The optimized kernel was then used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable to mitigate over-fitting. Because the slack variable also introduces error into the result, a soft-margin MKL was proposed to balance the margin against the error, and an iterative algorithm was used to solve for the combination coefficients and hyperplane parameters. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five emotions: happiness, sadness, anger, fear, and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
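    The abstract does not give the soft-margin MKL solver, but the kernel-target-alignment step it mentions is a standard criterion and can be sketched in a few lines of numpy: score each candidate RBF width by how well its Gram matrix aligns with the label outer product, then combine base kernels. The data, the gamma grid, and the fixed 50/50 kernel mixture below are all assumptions standing in for the learned combination coefficients.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian RBF Gram matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    """Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# toy two-class problem standing in for emotion-labeled acoustic features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y = np.array([-1] * 20 + [1] * 20)

# choose the RBF width whose kernel best aligns with the labels
gammas = [0.01, 0.1, 1.0, 10.0]
best = max(gammas, key=lambda g: alignment(rbf_kernel(X, g), y))

# a fixed convex combination of two base kernels stands in for the
# combination coefficients that the soft-margin MKL solver would learn
K_mix = 0.5 * rbf_kernel(X, best) + 0.5 * rbf_kernel(X, 0.1 * best)
```

`K_mix` would then be handed to a kernel SVM; the paper's contribution, per the abstract, is how those mixture weights and the margin/error trade-off are optimized jointly.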

  8. Voice gender discrimination provides a measure of more than pitch-related perception in cochlear implant users

    PubMed Central

    Li, Tianhao; Fu, Qian-Jie

    2013-01-01

    Objectives (1) To investigate whether voice gender discrimination (VGD) could be a useful indicator of the spectral and temporal processing abilities of individual cochlear implant (CI) users; (2) To examine the relationship between VGD and speech recognition with CI when comparable acoustic cues are used for both perception processes. Design VGD was measured using two talker sets with different inter-gender fundamental frequencies (F0), as well as different acoustic CI simulations. Vowel and consonant recognition in quiet and noise were also measured and compared with VGD performance. Study sample Eleven postlingually deaf CI users. Results The results showed that (1) mean VGD performance differed for different stimulus sets, (2) VGD and speech recognition performance varied among individual CI users, and (3) individual VGD performance was significantly correlated with speech recognition performance under certain conditions. Conclusions VGD measured with selected stimulus sets might be useful for assessing not only pitch-related perception, but also spectral and temporal processing by individual CI users. In addition to improvements in spectral resolution and modulation detection, the improvement in higher modulation frequency discrimination might be particularly important for CI users in noisy environments. PMID:21696330

  9. English vowel learning by speakers of Mandarin

    NASA Astrophysics Data System (ADS)

    Thomson, Ron I.

    2005-04-01

    One of the most influential models of second language (L2) speech perception and production [Flege, Speech Perception and Linguistic Experience (York, Baltimore, 1995) pp. 233-277] argues that during initial stages of L2 acquisition, perceptual categories sharing the same or nearly the same acoustic space as first language (L1) categories will be processed as members of that L1 category. Previous research has generally been limited to testing these claims on binary L2 contrasts, rather than larger portions of the perceptual space. This study examines the development of 10 English vowel categories by 20 Mandarin L1 learners of English. Imitation of English vowel stimuli by these learners, at 6 data collection points over the course of one year, were recorded. Using a statistical pattern recognition model, these productions were then assessed against native speaker norms. The degree to which the learners' perception/production shifted toward the target English vowels and the degree to which they matched L1 categories in ways predicted by theoretical models are discussed. The results of this experiment suggest that previous claims about perceptual assimilation of L2 categories to L1 categories may be too strong.

  10. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  11. The planum temporale as a computational hub.

    PubMed

    Griffiths, Timothy D; Warren, Jason D

    2002-07-01

    It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.

  12. Depression training in nursing homes: lessons learned from a pilot study.

    PubMed

    Smith, Marianne; Stolder, Mary Ellen; Jaggers, Benjamin; Liu, Megan Fang; Haedtke, Chris

    2013-02-01

    Late-life depression is common among nursing home residents, but often is not addressed by nurses. Using a self-directed CD-based depression training program, this pilot study used mixed methods to assess feasibility issues, determine nurse perceptions of training, and evaluate depression-related outcomes among residents in usual care and training conditions. Of 58 nurses enrolled, 24 completed the training and gave it high ratings. Outcomes for 50 residents include statistically significant reductions in depression severity over time (p < 0.001) among all groups. Depression training is an important vehicle to improve depression recognition and daily nursing care, but diverse factors must be addressed to assure optimal outcomes.

  13. Depression Training in Nursing Homes: Lessons Learned from a Pilot Study

    PubMed Central

    Smith, Marianne; Stolder, Mary Ellen; Jaggers, Benjamin; Liu, Megan; Haedke, Chris

    2014-01-01

    Late-life depression is common among nursing home residents, but often is not addressed by nurses. Using a self-directed, CD-based depression training program, this pilot study used mixed methods to assess feasibility issues, determine nurse perceptions of training, and evaluate depression-related outcomes among residents in usual care and training conditions. Of 58 nurses enrolled, 24 completed the training and gave it high ratings. Outcomes for 50 residents include statistically significant reductions in depression severity over time (p<0.001) among all groups. Depression training is an important vehicle to improve depression recognition and daily nursing care, but diverse factors must be addressed to assure optimal outcomes. PMID:23369120

  14. Voice input/output capabilities at Perception Technology Corporation

    NASA Technical Reports Server (NTRS)

    Ferber, Leon A.

    1977-01-01

    Condensed resumes of key company personnel at the Perception Technology Corporation are presented. The staff possesses expertise in speech recognition, speech synthesis, speaker authentication, and language identification. Hardware and software engineers' capabilities are included.

  15. Linking Livelihoods and Conservation: An Examination of Local Residents' Perceived Linkages Between Conservation and Livelihood Benefits Around Nepal's Chitwan National Park

    NASA Astrophysics Data System (ADS)

    Nepal, Sanjay; Spiteri, Arian

    2011-05-01

    This paper investigates local recognition of the link between incentive-based program (IBP) benefits and conservation, and how perceptions of benefits and linkage influence attitudes in communities surrounding Chitwan National Park, Nepal. A survey of 189 households conducted between October and December 2004 examined local residents' perceived benefits, their attitudes toward park management, and perception of linkages between conservation and livelihoods. Linkage perceptions were measured by a scale compared with a respondent's recognition of benefits to determine whether IBPs establish a connection between benefits and livelihoods. An attitude scale was also created to compare attitudes toward park management with perceptions of benefits and linkage to determine if IBPs led to positive attitudes, and if the recognition of a direct tie between livelihoods and natural resources made attitudes more favorable. Research results indicate that as acknowledgement of benefit increases, so does the perception of linkage between the resource and livelihoods. Similarly, when perceived benefit increases, so too does attitude towards management. Positive attitude towards park management is influenced more by perception of livelihood dependence on resources than by benefits received from the park. However, the overwhelmingly positive support voiced for conservation did not coincide with conduct. In spite of the positive attitudes and high perception of linkage, people did not necessarily behave in a way compatible with conservation. This suggests that while benefits alone can lead to positive attitudes, without clear linkages to conservation the IBP may lose its persuasive power when alternative options that conflict with conservation objectives promise greater economic benefit.

  16. Mispronunciation Detection for Language Learning and Speech Recognition Adaptation

    ERIC Educational Resources Information Center

    Ge, Zhenhao

    2013-01-01

    The areas of "mispronunciation detection" (or "accent detection" more specifically) within the speech recognition community are receiving increased attention now. Two application areas, namely language learning and speech recognition adaptation, are largely driving this research interest and are the focal points of this work.…

  17. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    PubMed

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. 
These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.

  18. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    ERIC Educational Resources Information Center

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  19. Children with a cochlear implant: characteristics and determinants of speech recognition, speech-recognition growth rate, and speech production.

    PubMed

    Wie, Ona Bø; Falkenberg, Eva-Signe; Tvete, Ole; Tomblin, Bruce

    2007-05-01

    The objectives of the study were to describe the characteristics of the first 79 prelingually deaf cochlear implant users in Norway and to investigate to what degree the variation in speech recognition, speech-recognition growth rate, and speech production could be explained by the characteristics of the child, the cochlear implant, the family, and the educational setting. Data gathered longitudinally were analysed using descriptive statistics, multiple regression, and growth-curve analysis. The results show that more than 50% of the variation could be explained by these characteristics. Daily user-time, non-verbal intelligence, mode of communication, length of CI experience, and educational placement had the highest effect on the outcome. The results also indicate that children educated in a bilingual approach to education have better speech perception and faster speech perception growth rate with increased focus on spoken language.

  20. Local visual perception bias in children with high-functioning autism spectrum disorders; do we have the whole picture?

    PubMed

    Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn

    2016-01-01

    While local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test, and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.

  1. Adaptive Learning and Pruning Using Periodic Packet for Fast Invariance Extraction and Recognition

    NASA Astrophysics Data System (ADS)

    Chang, Sheng-Jiang; Zhang, Bian-Li; Lin, Lie; Xiong, Tao; Shen, Jin-Yuan

    2005-02-01

    A new learning scheme using a periodic packet as the neuronal activation function is proposed for invariance extraction and recognition of handwritten digits. Simulation results show that the proposed network can extract the invariant feature effectively and improve both the convergence and the recognition rate.

  2. The Role of Tone Height, Melodic Contour, and Tone Chroma in Melody Recognition.

    ERIC Educational Resources Information Center

    Massaro, Dominic W.; And Others

    1980-01-01

    Relationships among tone height, melodic contour, tone chroma, and recognition of recently learned melodies were investigated. Results replicated previous studies using familiar folk songs, providing evidence that melodic contour, tone chroma, and tone height contribute to recognition of both highly familiar and recently learned melodies.…

  3. Students' perception of the learning environment in a distributed medical programme.

    PubMed

    Veerapen, Kiran; McAleer, Sean

    2010-09-24

    The learning environment of a medical school has a significant impact on students' achievements and learning outcomes. The importance of equitable learning environments across programme sites is implicit in distributed undergraduate medical programmes being developed and implemented. To study the learning environment and its equity across two classes and three geographically separate sites of a distributed medical programme at the University of British Columbia Medical School that commenced in 2004. The validated Dundee Ready Educational Environment Survey was sent to all students in their 2nd and 3rd year (classes graduating in 2009 and 2008) of the programme. The domains of the learning environment surveyed were: students' perceptions of learning, students' perceptions of teachers, students' academic self-perceptions, students' perceptions of the atmosphere, and students' social self-perceptions. Mean scores, frequency distribution of responses, and inter- and intrasite differences were calculated. The perception of the global learning environment at all sites was more positive than negative. It was characterised by a strongly positive perception of teachers. The work load and emphasis on factual learning were perceived negatively. Intersite differences within domains of the learning environment were more evident in the pioneer class (2008) of the programme. Intersite differences consistent across classes were largely related to on-site support for students. Shared strengths and weaknesses in the learning environment at UBC sites were evident in areas that were managed by the parent institution, such as the attributes of shared faculty and curriculum. A greater divergence in the perception of the learning environment was found in domains dependent on local arrangements and social factors that are less amenable to central regulation. 
This study underlines the need for ongoing comparative evaluation of the learning environment at the distributed sites and interaction between leaders of these sites.

  4. Recognition of Prior Learning: The Participants' Perspective

    ERIC Educational Resources Information Center

    Miguel, Marta C.; Ornelas, José H.; Maroco, João P.

    2016-01-01

    The current narrative on lifelong learning goes beyond formal education and training, including learning at work, in the family and in the community. Recognition of prior learning is a process of evaluation of those skills and knowledge acquired through life experience, allowing them to be formally recognized by the qualification systems. It is a…

  5. Transformative Learning: Immigrant Learners Who Participated in Recognition of Acquired Competencies (RAC)

    ERIC Educational Resources Information Center

    Moss, Leah; Brown, Andy

    2014-01-01

    Recognition of Acquired Competencies (RAC) as it is known in Quebec, Canada, or Prior Learning Assessment (PLA), requires a learner to engage in retrospective thought about their learning path, their learning style and their experiential knowledge. This process of critical self-reflection and rigorous analysis by the learner of their prior…

  6. Bidirectional Modulation of Recognition Memory

    PubMed Central

    Ho, Jonathan W.; Poeta, Devon L.; Jacobson, Tara K.; Zolnik, Timothy A.; Neske, Garrett T.; Connors, Barry W.

    2015-01-01

    Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30–40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30–40 Hz was not effective in increasing exploration of novel images. Stimulation at 10–15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. SIGNIFICANCE STATEMENT Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. 
In this study, we showed that stimulation of the PER could increase or decrease exploration of novel and familiar images depending on the frequency of stimulation. Our findings suggest that optical stimulation of PER at specific frequencies can predictably alter recognition memory. PMID:26424881

  7. The effects of musical and linguistic components in recognition of real-world musical excerpts by cochlear implant recipients and normal-hearing adults.

    PubMed

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce

    2012-01-01

    Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmitting key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly for items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, & classical styles). Results were analyzed as a function of hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, and age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, & listening for enjoyment).

  8. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    PubMed Central

    2014-01-01

    Research in psychophysics, neurophysiology, and functional imaging indicates that biological movement has a particular representation involving two pathways. The visual perception of biological movement is formed through two streams of the visual system, the dorsal and ventral processing streams: the ventral stream extracts form information, while the dorsal stream provides motion information. The active basic model (ABM), a hierarchical representation of the human figure, introduced a novelty in the form pathway by applying a Gabor-based supervised object recognition method, which increases biological plausibility while preserving similarity to the original model. In the motion pathway, a fuzzy inference system processes motion pattern information, making the recognition process more robust. The interaction of these pathways is also intriguing and has been considered in many studies across various fields. Here, this interaction has been investigated to obtain more appropriate results. An extreme learning machine (ELM) was employed as the classification unit of the model: it retains the main properties of artificial neural networks while substantially reducing training time. We compare two configurations of the interaction, one using a synergetic neural network and one using an ELM, in terms of accuracy and compatibility. PMID:25276860
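
The extreme learning machine mentioned in this record trains quickly because only its output layer is learned. Below is a minimal generic sketch of that idea, not the authors' model; the layer size, the tanh nonlinearity, and all names are illustrative assumptions.

```python
import numpy as np

# Sketch of an extreme learning machine (ELM): the input-to-hidden weights
# are drawn at random and never trained; only the hidden-to-output weights
# are solved in closed form by least squares, which is why training is fast.

def elm_train(X, y, n_hidden=64, n_classes=2, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.normal(size=n_hidden)                # random, fixed hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    T = np.eye(n_classes)[y]                     # one-hot targets
    beta = np.linalg.pinv(H) @ T                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Trained on two well-separated 2-D clusters, for instance, this recovers the training labels almost perfectly despite never adjusting the hidden layer.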

  9. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  10. Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution

    PubMed Central

    Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin

    2016-01-01

    The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be directly delivered to the advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternatively solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of both PSNR, SSIM and visual perception. PMID:26927114
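
The central idea in this record, alternating between an upscaling (data-consistency) update and a denoising update rather than upscaling alone, can be illustrated with a toy back-projection loop. This is a generic sketch of that alternation under simple assumed operators (average-pool downsampling, pixel-replication upsampling, and local-mean smoothing as a stand-in for the paper's non-local group-sparsity filter); it is not the JPISR algorithm itself.

```python
import numpy as np

def downsample(img, f=2):
    # toy observation model: average-pool by factor f
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    # pixel replication: each low-res pixel becomes an f-by-f block
    return np.kron(img, np.ones((f, f)))

def toy_sr(lowres, f=2, iters=20, lam=0.1):
    x = upsample(lowres, f)  # initial high-res estimate
    for _ in range(iters):
        # denoising-style update: blend each pixel with its 4-neighbour mean
        pad = np.pad(x, 1, mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        x = (1 - lam) * x + lam * neigh
        # data-consistency update: back-project the downsampling residual
        x = x + upsample(lowres - downsample(x, f), f)
    return x
```

Because the back-projection step is applied last, the returned estimate downsamples exactly back to the observed low-resolution image while the smoothing step keeps the upscaled result from amplifying noise.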

  11. Effects of central nervous system residua on cochlear implant results in children deafened by meningitis.

    PubMed

    Francis, Howard W; Pulsifer, Margaret B; Chinnici, Jill; Nutt, Robert; Venick, Holly S; Yeagle, Jennifer D; Niparko, John K

    2004-05-01

    This study explored factors associated with speech recognition outcomes in postmeningitic deafness (PMD). The results of cochlear implantation may vary in children with PMD because of sequelae that extend beyond the auditory periphery. To determine which factors might be most determinative of outcome of cochlear implantation in children with PMD. Retrospective chart review. A referral center for pediatric cochlear implantation and rehabilitation. Thirty children with cochlear implants who were deafened by meningitis were matched with subjects who were deafened by other causes based on the age at diagnosis, age at cochlear implantation, age at which hearing aids were first used, and method of communication used at home or in the classroom. Speech perception performance within the first 2 years after cochlear implantation and its relationship with presurgical cognitive measures and medical history. There was no difference in the overall cognitive or postoperative speech perception performance between the children with PMD and those deafened by other causes. The presence of postmeningitic hydrocephalus, however, posed greater challenges to the rehabilitation process, as indicated by significantly smaller gains in speech perception and a predilection for behavioral problems. By comparison, cochlear scarring and incomplete electrode insertion had no impact on speech perception results. Although the results demonstrated no significant delay in cognitive or speech perception performance in the PMD group, central nervous system residua, when present, can impede the acquisition of speech perception with a cochlear implant. Central effects associated with PMD may thus impact language learning potential; cognitive and behavioral therapy should be considered in rehabilitative planning and in establishing expectations of outcome.

  12. A Bayesian generative model for learning semantic hierarchies

    PubMed Central

    Mittelman, Roni; Sun, Min; Kuipers, Benjamin; Savarese, Silvio

    2014-01-01

    Building fine-grained visual recognition systems capable of recognizing tens of thousands of categories has received much attention in recent years. The well-known semantic hierarchical structure of categories and concepts has been shown to provide a key prior that allows for optimal predictions. The hierarchical organization of various domains and concepts has been subject to extensive research, and led to the development of the WordNet domains hierarchy (Fellbaum, 1998), which was also used to organize the images in the ImageNet (Deng et al., 2009) dataset, in which the category count approaches the human capacity. Still, for the human visual system, the form of the hierarchy must be discovered with minimal use of supervision or innate knowledge. In this work, we propose a new Bayesian generative model for learning such domain hierarchies, based on semantic input. Our model is motivated by the super-subordinate organization of domain labels and concepts that characterizes WordNet, and accounts for several important challenges: maintaining context information when progressing deeper into the hierarchy, learning a coherent semantic concept for each node, and modeling uncertainty in the perception process. PMID:24904452

  13. A nonmusician with severe Alzheimer's dementia learns a new song.

    PubMed

    Baird, Amee; Umbach, Heidi; Thompson, William Forde

    2017-02-01

    The hallmark symptom of Alzheimer's Dementia (AD) is impaired memory, but memory for familiar music can be preserved. We explored whether a non-musician with severe AD could learn a new song. A 91-year-old woman (NC) with severe AD was taught an unfamiliar song. We assessed her delayed song recall (24 hours and 2 weeks), music cognition, two-word recall (presented within a familiar song lyric, a famous proverb, or as a word-stem completion task), and lyrics and proverb completion. NC's music cognition (pitch and rhythm perception, recognition of familiar music, completion of lyrics) was relatively preserved. She recalled 0/2 words presented in song lyrics or proverbs, but 2/2 word stems, suggesting intact implicit memory function. She could sing along to the newly learnt song on immediate and delayed recall (24 hours and 2 weeks later), and with intermittent prompting could sing it alone. This is the first detailed study of preserved ability to learn a new song in a non-musician with severe AD, and contributes to observations of relatively preserved musical abilities in people with dementia.

  14. Young children make their gestural communication systems more language-like: segmentation and linearization of semantic elements in motion events.

    PubMed

    Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro

    2014-08-01

    Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.

  15. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse-code learning models in the training and testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse-code learning models in the training and testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
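
The alternating direction method of multipliers (ADMM) named in this record is most easily seen on the basic sparse-coding subproblem that dictionary-learning methods build on: l1-regularized least squares. The following is a generic ADMM sketch of that subproblem only, not the authors' bilevel solver; the dictionary D, lam, and rho below are illustrative.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_lasso(D, y, lam=0.1, rho=1.0, iters=200):
    # solve min_z 0.5*||Dz - y||^2 + lam*||z||_1 via the ADMM split x = z
    k = D.shape[1]
    z = np.zeros(k)
    u = np.zeros(k)  # scaled dual variable
    M = np.linalg.inv(D.T @ D + rho * np.eye(k))  # factor once, reuse
    for _ in range(iters):
        x = M @ (D.T @ y + rho * (z - u))  # quadratic subproblem
        z = soft(x + u, lam / rho)         # l1 proximal step
        u = u + x - z                      # dual update
    return z
```

With D set to the identity, the iterates converge to the known closed-form answer, soft-thresholding of y, which makes the routine easy to sanity-check.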

  16. Judgments of Learning are Influenced by Multiple Cues In Addition to Memory for Past Test Accuracy.

    PubMed

    Hertzog, Christopher; Hines, Jarrod C; Touron, Dayna R

    When people try to learn new information (e.g., in a school setting), they often have multiple opportunities to study the material. One of the most important things to know is whether people adjust their study behavior on the basis of past success so as to increase their overall level of learning (for example, by emphasizing information they have not yet learned). Monitoring their learning is a key part of being able to make those kinds of adjustments. We used a recognition memory task to replicate prior research showing that memory for past test outcomes influences later monitoring, as measured by judgments of learning (JOLs; confidence that the material has been learned), but also to show that subjective confidence in the test answer and the amount of time taken to restudy the items also have independent effects on JOLs. We also show that there are individual differences in the effects of test accuracy and test confidence on JOLs, indicating that some but not all people use past test experiences to guide monitoring of their new learning. Monitoring learning is therefore a complex process of considering multiple cues, and some people attend to those cues more effectively than others. Improving the quality of monitoring performance and learning could lead to better study behaviors and better learning. An individual's memory of past test performance (MPT) is often cited as the primary cue for judgments of learning (JOLs) following test experience during multi-trial learning tasks (Finn & Metcalfe, 2007; 2008). We used an associative recognition task to evaluate MPT-related phenomena, because performance monitoring, as measured by recognition test confidence judgments (CJs), is fallible and varies in accuracy across persons. The current study used multilevel regression models to show the simultaneous and independent influences of multiple cues on Trial 2 JOLs, in addition to performance accuracy (the typical measure of MPT in cued-recall experiments). 
These cues include recognition CJs, perceived recognition fluency, and Trial 2 study time allocation (an index of reprocessing fluency). Our results expand the scope of MPT-related phenomena in recognition memory testing to show independent effects of recognition test accuracy and CJs on second-trial JOLs, while also demonstrating individual differences in the effects of these cues on JOLs (as manifested in significant random effects for those regression effects in the model). The effect of study time on second-trial JOLs, controlling for other variables including Trial 1 recognition memory accuracy, also demonstrates that second-trial encoding behavior influences JOLs in addition to MPT.

  17. Bilateral Theta-Burst TMS to Influence Global Gestalt Perception

    PubMed Central

    Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto

    2012-01-01

    While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally (simultaneously) over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametric degrading of the object at the global level. Identification of the global Gestalt was significantly modulated only in the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves the TPJ and is co-dependent on both hemispheres. PMID:23110106

  19. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    PubMed

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels (which correspond to discrete emotion concepts) affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia, who have difficulty labeling their own emotions, struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology.

  20. Media Matter: The Effect of Medium of Presentation on Student's Recognition of Histopathology.

    PubMed

    Telang, Ajay; Jong, Nynke De; Dalen, Jan Van

    2016-12-01

    Pathology teaching has undergone transformation with the introduction of virtual microscopy as a teaching and learning tool. To assess if dental students can identify histopathology irrespective of the media of presentation and if the media affect students' oral pathology case-based learning scores. The perception of students towards a "hybrid" approach in teaching and learning histopathology in oral pathology was also assessed. A controlled experiment was conducted on year 4 and year 5 dental student groups using a performance test and a questionnaire survey. A response rate of 81% was noted for the performance test as well as the questionnaire survey. Results show a significant effect of media on performance of students, with virtual microscopy producing the best performance across all student groups in case-based learning scenarios. The order of preference for media was found to be virtual microscopy, followed by photomicrographs and light microscopy. However, 94% of students still prefer the present hybrid system for teaching and learning of oral pathology. The study shows that identification of histopathology by students is dependent on media and that the type of media has a significant effect on performance. Virtual microscopy is strongly perceived as a useful learning tool and brings out the best performance; however, the hybrid approach remains the most preferred approach for histopathology learning.

  1. Effects of congruence between preferred and perceived learning environments in nursing education in Taiwan: a cross-sectional study

    PubMed Central

    Yeh, Ting-Kuang; Huang, Hsiu-Mei; Chan, Wing P; Chang, Chun-Yen

    2016-01-01

    Objective To investigate the effects of congruence between preferred and perceived learning environments on learning outcomes of nursing students. Setting A nursing course at a university in central Taiwan. Participants 124 Taiwanese nursing students enrolled in a 13-week problem-based Fundamental Nursing curriculum. Design and methods Students' preferred learning environment, perceptions about the learning environment and learning outcomes (knowledge, self-efficacy and attitudes) were assessed. On the basis of test scores measuring their preferred and perceived learning environments, students were assigned to one of two groups: a ‘preferred environment aligned with perceived learning environment’ group and a ‘preferred environment discordant with perceived learning environment’ group. Learning outcomes were analysed by group. Outcome measures Most participants preferred learning in a classroom environment that combined problem-based and lecture-based instruction. However, a mismatch of problem-based instruction with students' perceptions occurred. Learning outcomes were significantly better when students' perceptions of their instructional activities were congruent with their preferred learning environment. Conclusions As problem-based learning becomes a focus of educational reform in nursing, teachers need to be aware of students' preferences and perceptions of the learning environment. Teachers may also need to improve the match between an individual student's perception and a teacher's intention in the learning environment, and between the student's preferred and actual perceptions of the learning environment. PMID:27207620

  2. Joint object and action recognition via fusion of partially observable surveillance imagery data

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Chan, Alex L.

    2017-05-01

    Partially observable group activities (POGA) occurring in confined spaces are epitomized by their limited observability of the objects and actions involved. In many POGA scenarios, different objects are being used by human operators for the conduct of various operations. In this paper, we describe the ontology of such POGA in the context of In-Vehicle Group Activity (IVGA) recognition. Initially, we describe the virtue of ontology modeling in the context of IVGA and show how such an ontology and a priori knowledge about the classes of in-vehicle activities can be fused for inference of human actions that consequentially leads to understanding of human activity inside the confined space of a vehicle. In this paper, we treat the problem of "action-object" as a duality problem. We postulate a correlation between observed human actions and the object that is being utilized within those actions, and conversely, if an object being handled is recognized, we may be able to expect a number of actions that are likely to be performed on that object. In this study, we use partially observable human postural sequences to recognize actions. Inspired by the learning capability of convolutional neural networks (CNNs), we present an architecture design using a new CNN model to learn "action-object" perception from surveillance videos. In this study, we apply a sequential Deep Hidden Markov Model (DHMM) as a post-processor to the CNN to decode realized observations into recognized actions and activities. To generate the needed imagery data set for the training and testing of these new methods, we use the IRIS virtual simulation software to generate high-fidelity and dynamic animated scenarios that depict in-vehicle group activities under different operational contexts. The results of our comparative investigation are discussed and presented in detail.
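
The hidden Markov model post-processing step described in this record, decoding a sequence of per-frame observations into the most likely sequence of actions, is standard Viterbi decoding. The following is a generic sketch of that decoding idea; the two-state transition and emission tables in the usage note are toy values, not the paper's learned model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    # obs: list of observation indices; pi: initial state probabilities;
    # A[i, j]: transition prob i -> j; B[i, o]: emission prob of o in state i.
    logd = np.log(pi) + np.log(B[:, obs[0]])  # log-prob of best path per state
    back = []                                  # backpointers, one per step
    for o in obs[1:]:
        scores = logd[:, None] + np.log(A)     # extend every path by one step
        back.append(np.argmax(scores, axis=0)) # best predecessor per state
        logd = np.max(scores, axis=0) + np.log(B[:, o])
    path = [int(np.argmax(logd))]              # best final state
    for bp in reversed(back):                  # walk the backpointers
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With sticky transitions and reliable emissions, the decoded state sequence tracks the observation sequence, smoothing over isolated noisy frames.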

  3. Differences between Students' and Teachers' Perceptions of Education: Profiles to Describe Congruence and Friction

    ERIC Educational Resources Information Center

    Könings, Karen D.; Seidel, Tina; Brand-Gruwel, Saskia; Merriënboer, Jeroen J. G.

    2014-01-01

    Teachers and students have their own perceptions of education. Congruent perceptions contribute to optimal teaching-learning processes and help achieving best learning outcomes. This study investigated patterns in differences between students' and teachers' perceptions of their learning environment. Student profiles were identified taking into…

  4. The Recognition of Prior Learning. Quality Assurance in Education and Training.

    ERIC Educational Resources Information Center

    New Zealand Qualifications Authority, Wellington.

    As this booklet describes, New Zealand's Education Amendment Act of 1990 made the country's Qualifications Authority (QA) responsible for developing and implementing a process for recognition of prior learning (RPL) that would enable individuals to receive formal recognition for skills and knowledge they already possess. As of 1993, the QA had…

  5. Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.

    PubMed

    Zhou, Pan; Lin, Zhouchen; Zhang, Chao

    2016-05-01

    Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.
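
The ridge-regression classifier stage named in this record has a closed-form solution. Below is a minimal generic sketch of that stage alone, operating on arbitrary feature vectors (the usage test uses random stand-ins, not actual LatLRR features); the regularization weight and the appended bias column are illustrative choices.

```python
import numpy as np

def ridge_fit(F, y, n_classes, lam=1e-2):
    # fit linear weights mapping features F to one-hot labels, with an
    # l2 penalty; solved in closed form via the normal equations
    F1 = np.hstack([F, np.ones((F.shape[0], 1))])  # append a bias column
    T = np.eye(n_classes)[y]                       # one-hot label matrix
    d = F1.shape[1]
    return np.linalg.solve(F1.T @ F1 + lam * np.eye(d), F1.T @ T)

def ridge_predict(F, W):
    F1 = np.hstack([F, np.ones((F.shape[0], 1))])
    return np.argmax(F1 @ W, axis=1)  # class with the largest response
```

Because the fit is a single linear solve, integrating it into a feature-learning objective (as the abstract describes) keeps the combined problem tractable.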

  6. Embodied learning of a generative neural model for biological motion perception and inference

    PubMed Central

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full-body motion-tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of its own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215

  8. Early prediction of student goals and affect in narrative-centered learning environments

    NASA Astrophysics Data System (ADS)

    Lee, Sunyoung

    Recent years have seen a growing recognition of the role of goal and affect recognition in intelligent tutoring systems. Goal recognition is the task of inferring users' goals from a sequence of observations of their actions. Because of the uncertainty inherent in every facet of human-computer interaction, goal recognition is challenging, particularly in contexts in which users can perform many actions in any order, as is the case with intelligent tutoring systems. Affect recognition is the task of identifying the emotional state of a user from a variety of physical cues, which are produced in response to affective changes in the individual. Accurately recognizing student goals and affect states could contribute to more effective and motivating interactions in intelligent tutoring systems. By exploiting knowledge of student goals and affect states, intelligent tutoring systems can dynamically modify their behavior to better support individual students. To create effective interactions, goal and affect recognition models should satisfy two key requirements. First, because incorrectly predicted goals and affect states could significantly diminish the effectiveness of interactive systems, goal and affect recognition models should provide accurate predictions of user goals and affect states: as observations of users' activities become available, recognizers should make accurate "early" predictions. Second, goal and affect recognition models should be highly efficient so they can operate in real time. To address these issues, we present an inductive approach to recognizing student goals and affect states in intelligent tutoring systems by learning goal and affect recognition models. Our work focuses on goal and affect recognition in an important new class of intelligent tutoring systems: narrative-centered learning environments. We report the results of empirical studies of recognition models induced from observations of students' interactions in narrative-centered learning environments. Experimental results suggest that induced models can make accurate early predictions of student goals and affect states, and that they are sufficiently efficient to meet the real-time performance requirements of interactive learning environments.

  9. The Development of the Orthographic Consistency Effect in Speech Recognition: From Sublexical to Lexical Involvement

    ERIC Educational Resources Information Center

    Ventura, Paulo; Morais, Jose; Kolinsky, Regine

    2007-01-01

    The influence of orthography on children's on-line auditory word recognition was studied from the end of Grade 2 to the end of Grade 4, by examining the orthographic consistency effect [Ziegler, J. C., & Ferrand, L. (1998). Orthography shapes the perception of speech: The consistency effect in auditory recognition. "Psychonomic Bulletin & Review",…

  10. The Affordance of Speech Recognition Technology for EFL Learning in an Elementary School Setting

    ERIC Educational Resources Information Center

    Liaw, Meei-Ling

    2014-01-01

    This study examined the use of speech recognition (SR) technology to support a group of elementary school children's learning of English as a foreign language (EFL). SR technology has been used in various language learning contexts. Its application to EFL teaching and learning is still relatively recent, but a solid understanding of its…

  11. Let the Doors of Learning Be Open to All--A Case for Recognition of Prior Learning

    ERIC Educational Resources Information Center

    Singh, A. M.

    2011-01-01

    Recognition of Prior Learning (RPL) is a process of evaluating an adult learner's previous experience, skills, knowledge, and informal learning and articulating them toward a formal qualification. Whilst RPL is enshrined in a number of international qualifications frameworks, there are certain barriers which have prevented its application and…

  12. How Category Structure Influences the Perception of Object Similarity: The Atypicality Bias

    PubMed Central

    Tanaka, James William; Kantner, Justin; Bartlett, Marni

    2011-01-01

    Why do some faces appear more similar than others? Beyond structural factors, we speculate that similarity is governed by the organization of faces located in a multi-dimensional face space. To test this hypothesis, we morphed a typical face with an atypical face. If similarity judgments are guided purely by physical properties, the morph should be perceived as equally similar to its typical parent and its atypical parent. However, contrary to the structural prediction, our results showed that the morph face was perceived to be more similar to the atypical face than to the typical face. Our empirical studies show that the atypicality bias is not limited to faces, but extends to other object categories (birds) whose members share common shape properties. We also demonstrate that the atypicality bias is malleable and can change with category learning and experience. Collectively, the empirical evidence indicates that perceptions of face and object similarity are affected by the distribution of stimuli in a face or object space. In this framework, atypical stimuli are located in a sparser region of the space where there is less competition for recognition; therefore, these representations capture a broader range of inputs. In contrast, typical stimuli are located in a denser region of category space where there is increased competition for recognition; hence, these representations draw a more restricted range of face inputs. These results suggest that the perceived likeness of an object is influenced by the organization of surrounding exemplars in the category space. PMID:22685441

  13. Image dependency in the recognition of newly learnt faces.

    PubMed

    Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily

    2017-05-01

    Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these leads to poor recognition accuracy. Three experiments are reported to extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural, representations.

  14. Presentations of Shape in Object Recognition and Long-Term Visual Memory

    DTIC Science & Technology

    1994-04-05

    (Reference fragments recovered from the scanned record:) Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147. Biederman, I., & Gerhardstein, P. C. (1993). Recognizing depth-rotated… Submitted to Journal of Experimental Psychology: Human Perception and Performance.

  15. Effects of cross-language voice training on speech perception: Whose familiar voices are more intelligible?

    PubMed Central

    Levi, Susannah V.; Winters, Stephen J.; Pisoni, David B.

    2011-01-01

    Previous research has shown that familiarity with a talker’s voice can improve linguistic processing (herein, “Familiar Talker Advantage”), but this benefit is constrained by the context in which the talker’s voice is familiar. The current study examined how familiarity affects intelligibility by manipulating the type of talker information available to listeners. One group of listeners learned to identify bilingual talkers’ voices from English words, where they learned language-specific talker information. A second group of listeners learned the same talkers from German words, and thus only learned language-independent talker information. After voice training, both groups of listeners completed a word recognition task with English words produced by both familiar and unfamiliar talkers. Results revealed that English-trained listeners perceived more phonemes correct for familiar than unfamiliar talkers, while German-trained listeners did not show improved intelligibility for familiar talkers. The absence of a processing advantage in speech intelligibility for the German-trained listeners demonstrates limitations on the Familiar Talker Advantage, which crucially depends on the language context in which the talkers’ voices were learned; knowledge of how a talker produces linguistically relevant contrasts in a particular language is necessary to increase speech intelligibility for words produced by familiar talkers. PMID:22225059

  16. [Perception features of emotional intonation of short pseudowords].

    PubMed

    Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M

    2012-01-01

    Reaction time and recognition accuracy for speech emotional intonations in short meaningless words differing in only one phoneme were studied, with and without background noise, in 49 adults aged 20-79 years. The results were compared with the same parameters for emotional intonations in meaningful speech utterances under similar conditions. Perception of emotional intonations at different linguistic levels (phonological and lexico-semantic) was found to have both common features and certain peculiarities. Recognition characteristics of emotional intonations as a function of the gender and age of listeners appeared to be invariant with regard to the linguistic level of the speech stimuli. The phonemic composition of the pseudowords was found to influence emotional perception, especially against background noise. The stimulus acoustic characteristic most responsible for the perception of speech emotional prosody in short meaningless words under the two experimental conditions, i.e. with and without background noise, was variation in the fundamental frequency.

  17. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    Accurate recognition of handwritten characters on scanned sunspot drawings is critical for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proved genuinely successful at training multi-layer network structures. A CNN is used to train a recognition model on the handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences, and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
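    As an illustration of the core operation such a CNN applies to a character image, the sketch below implements a single convolution, ReLU, and max-pooling stage in pure Python. The kernel weights and the 4x4 glyph patch are invented for illustration; in the actual system the kernels are learned from labeled character images and run via a deep learning library.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNNs compute it)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A vertical-edge kernel applied to a toy glyph patch whose bright
# vertical stroke sits in the left two columns.
patch = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, -1],
          [1, -1]]
features = max_pool2x2(relu(conv2d(patch, kernel)))
print(features)  # [[2]] -- the stroke's right edge survives pooling
```

    Stacking several such convolution-pooling stages, followed by fully connected layers and a softmax over the character classes, yields the kind of recognition model the paper trains.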

  18. Towards Multimodal Emotion Recognition in E-Learning Environments

    ERIC Educational Resources Information Center

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner's facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and…

  19. Recognition of emotion from body language among patients with unipolar depression

    PubMed Central

    Loi, Felice; Vaidya, Jatin G.; Paradiso, Sergio

    2013-01-01

    Major depression may be associated with abnormal perception of emotions and impairment in social adaptation. Emotion recognition from body language, and its possible implications for social adjustment, has not been examined in patients with depression. Three groups of participants (51 with depression; 68 with a history of depression in remission; and 69 never-depressed healthy volunteers) were compared on static and dynamic tasks of emotion recognition from body language. Psychosocial adjustment was assessed using the Social Adjustment Scale Self-Report (SAS-SR). Participants with current depression showed reduced recognition accuracy for happy stimuli across tasks relative to remission and comparison participants. Participants with depression tended to show poorer psychosocial adaptation relative to the remission and comparison groups. Correlations between perception accuracy for happiness and scores on the SAS-SR were largely not significant. These results indicate that depression is associated with a reduced ability to appraise positive stimuli of emotional body language, but that emotion recognition performance is not tied to social adjustment. These alterations do not appear to be present in participants in remission, suggesting state-like qualities. PMID:23608159

  20. Focusing Teaching on Students: Examining Student Perceptions of Learning Strategies

    ERIC Educational Resources Information Center

    Lumpkin, Angela; Achen, Rebecca; Dodd, Regan

    2015-01-01

    This study examined undergraduate and graduate students' perceptions of the impact of in-class learning activities, out-of-class learning activities, and instructional materials on their learning. Using survey methodology, students anonymously assessed their perceptions of in-class activities, out-of-class activities, and instructional materials…

  1. Longing for existential recognition: a qualitative study of everyday concerns for people with somatoform disorders.

    PubMed

    Lind, Annemette Bondo; Risoer, Mette Bech; Nielsen, Klaus; Delmar, Charlotte; Christensen, Morten Bondo; Lomborg, Kirsten

    2014-02-01

    Patients with somatoform disorders could be vulnerable to stressors and have difficulties coping with stress. The aim was to explore what the patients experience as stressful and how they resolve stress in everyday life. A cross-sectional retrospective design using 24 semi-structured individual life history interviews was employed. Data analysis was based on grounded theory. A major concern for patients was a longing for existential recognition. This influenced the patients' self-confidence, stress appraisals, symptom perceptions, and coping attitudes. Generally, patients had difficulties with self-confidence and self-recognition of bodily sensations, feelings, vulnerability, and needs, which negatively framed their attempts to obtain recognition in social interactions. Experiences of recognition appeared in three different modalities: 1) "existential misrecognition" covered the experience of being met with distrust and disrespect, 2) "uncertain existential recognition" covered experiences of unclear communication and a perception of not being totally recognized, and 3) "successful existential recognition" covered experiences of total respect and understanding. "Misrecognition" and "uncertain recognition" were related to decreased self-confidence, avoidant coping behaviours, and increased stress and symptom appraisal, whereas "successful recognition" was related to higher self-confidence, active coping behaviours, and decreased stress and symptom appraisal. Different modalities of existential recognition influenced self-identity and social identity, affecting patients' daily stress and symptom appraisals, self-confidence, self-recognition, and coping attitudes. Clinically, it seems crucial to improve the patients' ability to communicate concerns, feelings, and needs in social interactions. Better communicative skills and more active coping could reduce the harm the patients experienced by not being recognized and increase the healing potential of successful recognition.

  2. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one instantly recognises them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for recognising the sounds of a human walking.

  3. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks of hierarchical visual recognition more effectively. A visual tree is then learned by assigning visually similar atomic object classes with similar learning complexities to the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually similar atomic object classes effectively. Our HD-MTL algorithm can integrate two discriminative regularization terms to control inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive results in improving the accuracy rates for large-scale visual recognition.
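    The coarse-to-fine control flow behind such a visual tree can be sketched as follows. The group and class scoring functions below are toy stand-ins (invented names and weights) for the learned deep node classifiers, but the two-stage decision, first a group of visually similar classes, then an atomic class within it, mirrors the hierarchical idea.

```python
def predict_hierarchical(features, group_scorers, class_scorers):
    """group_scorers: {group: score_fn}; class_scorers: {group: {cls: score_fn}}."""
    # Coarse step: choose the most likely group of visually similar classes.
    group = max(group_scorers, key=lambda g: group_scorers[g](features))
    # Fine step: discriminate only among the atomic classes of that group.
    scores = class_scorers[group]
    return group, max(scores, key=lambda c: scores[c](features))

# Toy 2-D features: [roundness, leggedness]; two groups of atomic classes.
group_scorers = {
    "round_things": lambda f: f[0],
    "animals": lambda f: f[1],
}
class_scorers = {
    "round_things": {"ball": lambda f: f[0] - 0.1, "orange": lambda f: f[0] - 0.3},
    "animals": {"cat": lambda f: f[1] - 0.2, "dog": lambda f: f[1] - 0.1},
}
print(predict_hierarchical([0.2, 0.9], group_scorers, class_scorers))
```

    Confining the fine-grained decision to one group is what lets the node classifiers specialize in separating visually similar classes, at the cost that a coarse-step error cannot be recovered, which is why HD-MTL adds regularization to control inter-level error propagation.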

  4. Learning Models and Real-Time Speech Recognition.

    ERIC Educational Resources Information Center

    Danforth, Douglas G.; And Others

    This report describes the construction and testing of two "psychological" learning models for the purpose of computer recognition of human speech over the telephone. One of the two models was found to be superior in all tests. A regression analysis yielded a 92.3% recognition rate for 14 subjects ranging in age from 6 to 13 years. Tests…

  5. Auditory-motor learning influences auditory memory for music.

    PubMed

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  6. Human body perception and higher-level person perception are dissociated in early development.

    PubMed

    Slaughter, Virginia

    2011-01-01

    Developmental data support the proposal that human body perceptual processing is distinct from other aspects of person perception. Infants are sensitive to human bodily motion and attribute goals to human arm movements before they demonstrate recognition of human body structure. The developmental data suggest the possibility of bidirectional linkages between representations mediated by the extrastriate body area (EBA) and fusiform body area (FBA) and these higher-level elements of person perception.

  7. Students' perception of the learning environment in a distributed medical programme

    PubMed Central

    Veerapen, Kiran; McAleer, Sean

    2010-01-01

    Background: The learning environment of a medical school has a significant impact on students' achievements and learning outcomes. The importance of equitable learning environments across programme sites is implicit in distributed undergraduate medical programmes being developed and implemented. Purpose: To study the learning environment and its equity across two classes and three geographically separate sites of a distributed medical programme at the University of British Columbia Medical School that commenced in 2004. Method: The validated Dundee Ready Educational Environment Survey was sent to all students in their 2nd and 3rd year (classes graduating in 2009 and 2008) of the programme. The domains of the learning environment surveyed were: students' perceptions of learning, students' perceptions of teachers, students' academic self-perceptions, students' perceptions of the atmosphere, and students' social self-perceptions. Mean scores, frequency distribution of responses, and inter- and intrasite differences were calculated. Results: The perception of the global learning environment at all sites was more positive than negative. It was characterised by a strongly positive perception of teachers. The workload and emphasis on factual learning were perceived negatively. Intersite differences within domains of the learning environment were more evident in the pioneer class (2008) of the programme. Intersite differences consistent across classes were largely related to on-site support for students. Conclusions: Shared strengths and weaknesses in the learning environment at UBC sites were evident in areas that were managed by the parent institution, such as the attributes of shared faculty and curriculum. A greater divergence in the perception of the learning environment was found in domains dependent on local arrangements and social factors that are less amenable to central regulation. This study underlines the need for ongoing comparative evaluation of the learning environment at the distributed sites and interaction between the leaders of these sites. PMID:20922033

  8. Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization

    ERIC Educational Resources Information Center

    Bardhan, Neil Prodeep

    2010-01-01

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…

  9. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of learning, online, a feature transformation matrix expressed in the original feature space, and propose an online passive-aggressive feature transformation algorithm. These original features are then mapped to kernel space, and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of different parameter settings in the proposed algorithms and evaluate model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods in cross-domain and multiclass object recognition applications.
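    A minimal sketch of the kind of online passive-aggressive update involved is given below, assuming a bilinear similarity sim(x, z) = xᵀMz and a hinge loss with unit margin; the paper's exact loss and update rule may differ, so this is illustrative only.

```python
def sim(M, x, z):
    """Bilinear similarity x^T M z for plain Python lists."""
    return sum(M[i][j] * x[i] * z[j]
               for i in range(len(x)) for j in range(len(z)))

def pa_update(M, x, z, y, margin=1.0):
    """One PA step on a pair: y = +1 for same-class, -1 for different-class."""
    loss = max(0.0, margin - y * sim(M, x, z))
    if loss == 0.0:
        return M                      # passive: pair already scored with a margin
    # aggressive: smallest change to M that zeroes the hinge loss
    norm_sq = sum((x[i] * z[j]) ** 2
                  for i in range(len(x)) for j in range(len(z)))
    tau = loss / norm_sq
    return [[M[i][j] + tau * y * x[i] * z[j] for j in range(len(z))]
            for i in range(len(x))]

M = [[1.0, 0.0], [0.0, 1.0]]          # start from the identity (plain dot product)
x, z = [1.0, 0.0], [0.0, 1.0]         # a same-class pair the identity mis-scores
M = pa_update(M, x, z, +1)
print(round(sim(M, x, z), 3))         # 1.0 -- the pair now meets the margin
```

    A k-nearest-neighbor classifier can then rank training examples by the learned sim() instead of the raw dot product, which is the role the learned metric plays in the paper.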

  10. The Psychophysics of Algebra Expertise: Mathematics Perceptual Learning Interventions Produce Durable Encoding Changes

    ERIC Educational Resources Information Center

    Bufford, Carolyn A.; Mettler, Everett; Geller, Emma H.; Kellman, Philip J.

    2014-01-01

    Mathematics requires thinking but also pattern recognition. Recent research indicates that perceptual learning (PL) interventions facilitate discovery of structure and recognition of patterns in mathematical domains, as assessed by tests of mathematical competence. Here we sought direct evidence that a brief perceptual learning module (PLM)…

  11. Effects of congruence between preferred and perceived learning environments in nursing education in Taiwan: a cross-sectional study.

    PubMed

    Yeh, Ting-Kuang; Huang, Hsiu-Mei; Chan, Wing P; Chang, Chun-Yen

    2016-05-20

    Objectives: To investigate the effects of congruence between preferred and perceived learning environments on the learning outcomes of nursing students. Setting: A nursing course at a university in central Taiwan. Participants: 124 Taiwanese nursing students enrolled in a 13-week problem-based Fundamental Nursing curriculum. Methods: Students' preferred learning environment, perceptions of the learning environment, and learning outcomes (knowledge, self-efficacy and attitudes) were assessed. On the basis of test scores measuring their preferred and perceived learning environments, students were assigned to one of two groups: a 'preferred environment aligned with perceived learning environment' group and a 'preferred environment discordant with perceived learning environment' group. Learning outcomes were analysed by group. Results: Most participants preferred learning in a classroom environment that combined problem-based and lecture-based instruction; however, a mismatch between problem-based instruction and students' perceptions occurred. Learning outcomes were significantly better when students' perceptions of their instructional activities were congruent with their preferred learning environment. Conclusions: As problem-based learning becomes a focus of educational reform in nursing, teachers need to be aware of students' preferences and perceptions of the learning environment. Teachers may also need to improve the match between an individual student's perception and the teacher's intention in the learning environment, and between the student's preferred and actual perceptions of the learning environment.

  12. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    PubMed

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
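
    The 'wide' discriminative network described above can be illustrated with a minimal error-driven (Rescorla-Wagner-style) learner that maps co-occurring acoustic cues directly onto meaning outcomes, with no phone layer in between. The cue and outcome names below are invented for illustration and are not taken from the model itself:

```python
cues = ["band3_rise", "band7_fall", "band1_peak", "band5_flat"]
outcomes = ["HAND", "WATER"]

# weights[cue][outcome]: association strength from acoustic cue to meaning
weights = {c: {o: 0.0 for o in outcomes} for c in cues}
RATE = 0.1

def update(active_cues, present):
    """Error-driven (Rescorla-Wagner / Widrow-Hoff) update: every active
    cue's weight moves toward 1 for the outcome that occurred and toward
    0 for the outcomes that did not."""
    for o in outcomes:
        pred = sum(weights[c][o] for c in active_cues)
        err = (1.0 if o == present else 0.0) - pred
        for c in active_cues:
            weights[c][o] += RATE * err

def recognize(active_cues):
    """Recognition: pick the meaning with the highest summed activation."""
    return max(outcomes, key=lambda o: sum(weights[c][o] for c in active_cues))

# Train on two toy 'words', each a recurring bundle of acoustic cues.
for _ in range(200):
    update(["band3_rise", "band7_fall"], "HAND")
    update(["band1_peak", "band5_flat"], "WATER")

print(recognize(["band3_rise", "band7_fall"]))  # -> HAND
```

    Because cues compete for predictive value through the shared error term, recurring cue bundles come to discriminate between meanings directly, which is the gist of the phone-free architecture.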

  13. EEG-based recognition of video-induced emotions: selecting subject-independent feature set.

    PubMed

    Kortelainen, Jukka; Seppänen, Tapio

    2013-01-01

    Emotions are fundamental to everyday life, affecting our communication, learning, perception, and decision making. Incorporating emotions into human-computer interaction (HCI) would be a significant step forward, offering great potential for developing advanced future technologies. Because the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rates reported in the literature. In the future, further analysis of the video-induced EEG changes, including topographical differences in the spectral features, is needed.
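
    Sequential forward floating search, the feature-selection method used above, alternates greedy additions with conditional removals. A minimal sketch, assuming a caller-supplied scoring function (in practice, cross-validated classification accuracy on the EEG features):

```python
def sffs(features, score, k):
    """Sequential forward floating search (sketch).
    `score(subset)` returns a quality measure for a feature subset."""
    selected = []
    while len(selected) < k:
        # Forward step: add the single best remaining feature.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating (backward) steps: drop a feature whenever doing so
        # strictly improves the score, then re-check.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
                    break
    return selected

# Toy example: features 'a' and 'c' jointly score highest.
table = {frozenset(s): v for s, v in [
    (("a",), 0.6), (("b",), 0.55), (("c",), 0.5),
    (("a", "b"), 0.62), (("a", "c"), 0.7), (("b", "c"), 0.58),
]}
print(sffs(["a", "b", "c"], lambda s: table.get(frozenset(s), 0.0), 2))
```

    The floating backward step is what distinguishes SFFS from plain forward selection: it can undo an early greedy choice that later turns out to be redundant.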

  14. Qualitative Analysis of Student Perceptions Comparing Team-based Learning and Traditional Lecture in a Pharmacotherapeutics Course.

    PubMed

    Remington, Tami L; Bleske, Barry E; Bartholomew, Tracy; Dorsch, Michael P; Guthrie, Sally K; Klein, Kristin C; Tingen, Jeffrey M; Wells, Trisha D

    2017-04-01

    Objective. To qualitatively compare students' attitudes and perceptions regarding team-based learning (TBL) and lecture. Design. Students were exposed to TBL and lecture in an elective pharmacotherapeutics course in a randomized, prospective, cross-over design. After completing the course, students provided their attitudes and perceptions through a written self-reflection and narrative questions on the end-of-course evaluation. Student responses were reviewed using a grounded theory coding method. Assessment. Students' responses yielded five major themes: impact of TBL on learning, perceptions about TBL learning methods, changes in approaches to learning, building skills for professional practice, and enduring challenges. Overall, students report TBL enhances their learning of course content (knowledge and application), teamwork skills, and lifelong learning skills. Conclusion. Students' attitudes and perceptions support TBL as a viable pedagogy for teaching pharmacotherapeutics.

  15. Qualitative Analysis of Student Perceptions Comparing Team-based Learning and Traditional Lecture in a Pharmacotherapeutics Course

    PubMed Central

    Bleske, Barry E.; Bartholomew, Tracy; Dorsch, Michael P.; Guthrie, Sally K.; Klein, Kristin C.; Tingen, Jeffrey M.; Wells, Trisha D.

    2017-01-01

    Objective. To qualitatively compare students’ attitudes and perceptions regarding team-based learning (TBL) and lecture. Design. Students were exposed to TBL and lecture in an elective pharmacotherapeutics course in a randomized, prospective, cross-over design. After completing the course, students provided their attitudes and perceptions through a written self-reflection and narrative questions on the end-of-course evaluation. Student responses were reviewed using a grounded theory coding method. Assessment. Students’ responses yielded five major themes: impact of TBL on learning, perceptions about TBL learning methods, changes in approaches to learning, building skills for professional practice, and enduring challenges. Overall, students report TBL enhances their learning of course content (knowledge and application), teamwork skills, and lifelong learning skills. Conclusion. Students’ attitudes and perceptions support TBL as a viable pedagogy for teaching pharmacotherapeutics. PMID:28496275

  16. Do the opportunities for learning and personal development lead to happiness? It depends on work-family conciliation.

    PubMed

    Rego, Arménio; Pina E Cunha, Miguel

    2009-07-01

    The study shows how the perceptions of opportunities for learning and personal development predict five dimensions of affective well-being (AWB: pleasure, comfort, placidity, enthusiasm, and vigor), and how this relationship is moderated by the perceptions of work-family conciliation. A sample comprising 404 individuals was collected. The findings show the following: (1) both the perceptions of opportunities for learning and personal development and perceptions of work-family conciliation predict AWB, the happier individuals being those who have high perceptions on both variables; (2) both variables interact in predicting AWB, in such a way that perceptions of high opportunities for learning and personal development may not lead to higher AWB if work-family conciliation is low. Post hoc analysis also suggests that the relationship between the perceptions of opportunities for learning and personal development and AWB tends to be nonlinear for individuals with perceptions of low work-family conciliation. (c) 2009 APA, all rights reserved.

  17. The Effects of Musical and Linguistic Components in Recognition of Real-World Musical Excerpts by Cochlear Implant Recipients and Normal-Hearing Adults

    PubMed Central

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce

    2011-01-01

    Background Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. Objective The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Methods Participants were tested on melody recognition of complex melodies (pop, country, classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Results CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Age at time of testing was negatively correlated with recognition performance. Conclusions These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258

  18. Student Perceptions of Personalised Learning: Development and Validation of a Questionnaire with Regional Secondary Students

    ERIC Educational Resources Information Center

    Waldrip, Bruce; Cox, Peter; Deed, Craig; Dorman, Jeffrey; Edwards, Debra; Farrelly, Cathleen; Keeffe, Mary; Lovejoy, Valeria; Mow, Lucy; Prain, Vaughan; Sellings, Peter; Yager, Zali

    2014-01-01

    This project sought to evaluate regional students' perceptions of their readiness to learn, assessment processes, engagement, extent to which their learning is personalised and to relate these to academic efficacy, academic achievement, and student well-being. It also examined teachers' perceptions of students' readiness to learn, the assessment…

  19. Turkish High School Student's Perceptions of Learning Environment in Biology Classrooms and Their Attitudes toward Biology.

    ERIC Educational Resources Information Center

    Cakiroglu, Jale; Telli, Sibel; Cakiroglu, Erdinc

    The purpose of this study was to examine Turkish high school students' perceptions of learning environment in biology classrooms and to investigate relationships between learning environment and students' attitudes toward biology. Secondly, the study aimed to investigate the differences in students' perceptions of learning environments in biology…

  20. Regulating recognition decisions through incremental reinforcement learning.

    PubMed

    Han, Sanghoon; Dobbins, Ian G

    2009-06-01

    Does incremental reinforcement learning influence recognition memory judgments? We examined this question by subtly altering the relative validity or availability of feedback in order to differentially reinforce old or new recognition judgments. Experiment 1 probabilistically and incorrectly indicated that either misses or false alarms were correct in the context of feedback that was otherwise accurate. Experiment 2 selectively withheld feedback for either misses or false alarms in the context of feedback that was otherwise present. Both manipulations caused prominent shifts of recognition memory decision criteria that remained for considerable periods even after feedback had been altogether removed. Overall, these data demonstrate that incremental reinforcement-learning mechanisms influence the degree of caution subjects exercise when evaluating explicit memories.
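
    The criterion-shift idea can be sketched as a signal-detection observer whose decision criterion is nudged incrementally by feedback, with misses probabilistically (and invalidly) endorsed as correct, loosely following Experiment 1. All distributions, step sizes, and probabilities below are illustrative assumptions, not the authors' model:

```python
import random

random.seed(7)

def run(invalid_feedback_p, trials=1000):
    """Signal-detection observer with an incrementally reinforced criterion.
    A miss (responding "new" to an old item) is endorsed as correct with
    probability `invalid_feedback_p`, mimicking biased feedback."""
    criterion, step = 0.0, 0.02
    for _ in range(trials):
        is_old = random.random() < 0.5
        # Old items carry more memory-strength evidence on average.
        evidence = random.gauss(1.0 if is_old else 0.0, 1.0)
        say_old = evidence > criterion
        correct = (say_old == is_old)
        if is_old and not say_old and random.random() < invalid_feedback_p:
            correct = True  # invalid feedback: the miss is called correct
        # Reinforced responses become more likely: an endorsed "old" lowers
        # the criterion, an endorsed "new" raises it; errors push the
        # opposite way. The criterion persists across trials.
        if correct:
            criterion += -step if say_old else step
        else:
            criterion += step if say_old else -step
    return criterion

baseline = run(0.0)
biased = run(0.8)
# Endorsing misses reinforces "new" responses, so the biased criterion
# ends up more conservative (higher) than the accurate-feedback baseline.
print(f"baseline={baseline:.2f}  biased={biased:.2f}")
```

    This toy walk captures only the direction of the effect; the lasting criterion value after feedback is removed corresponds to the persistent shifts the experiments report.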

  1. [Disturbances in time perception in relation to changes in experiencing music in experimental psychosis].

    PubMed

    Weber, K

    1977-01-01

    Music is a structure ('Gestalt') in time. The recognition of disturbances of the perception of music enhances the knowledge of disorders of perception of time. Disturbances of perception of music and time in experimental psychoses (psilocybine) are discussed in relation to the studies by Piaget on the development of the notion of time in childhood. The results allow a new interpretation of the disturbances of the perception of time in diencephalic disorders as described in the literature.

  2. Intact attentional processing but abnormal responding in M1 muscarinic receptor-deficient mice using an automated touchscreen method

    PubMed Central

    Bartko, Susan J.; Romberg, Carola; White, Benjamin; Wess, Jürgen; Bussey, Timothy J.; Saksida, Lisa M.

    2014-01-01

    Cholinergic receptors have been implicated in schizophrenia, Alzheimer’s disease, Parkinson’s disease, and Huntington’s disease. However, to better target therapeutically the appropriate receptor subsystems, we need to understand more about the functions of those subsystems. In the current series of experiments, we assessed the functional role of M1 receptors in cognition by testing M1 receptor-deficient mice (M1R−/−) on the five-choice serial reaction time test of attentional and response functions, carried out using a computer-automated touchscreen test system. In addition, we tested these mice on several tasks featuring learning, memory and perceptual challenges. An advantage of the touchscreen method is that each test in the battery is carried out in the same task setting, using the same types of stimuli, responses and feedback, thus providing a high level of control and task comparability. The surprising finding, given the predominance of the M1 receptor in cortex, was the complete lack of effect of M1 deletion on measures of attentional function per se. Moreover, M1R−/− mice performed relatively normally on tests of learning, memory and perception, although they were impaired in object recognition memory with, but not without, an interposed delay interval. They did, however, show clear abnormalities on a variety of response measures: M1R−/− mice displayed fewer omissions, more premature responses, and increased perseverative responding compared to wild-types. These data suggest that M1R−/− mice display abnormal responding in the face of relatively preserved attention, learning and perception. PMID:21903112

  3. Distinct Effects of Memory Retrieval and Articulatory Preparation when Learning and Accessing New Word Forms

    PubMed Central

    Nora, Anni; Renvall, Hanna; Kim, Jeong-Young; Service, Elisabet; Salmelin, Riitta

    2015-01-01

    Temporal and frontal activations have been implicated in learning of novel word forms, but their specific roles remain poorly understood. The present magnetoencephalography (MEG) study examines the roles of these areas in processing newly-established word form representations. The cortical effects related to acquiring new phonological word forms during incidental learning were localized. Participants listened to and repeated back new word form stimuli that adhered to native phonology (Finnish pseudowords) or were foreign (Korean words), with a subset of the stimuli recurring four times. Subsequently, a modified 1-back task and a recognition task addressed whether the activations modulated by learning were related to planning for overt articulation, while parametrically added noise probed reliance on developing memory representations during effortful perception. Learning resulted in decreased left superior temporal and increased bilateral frontal premotor activation for familiar compared to new items. The left temporal learning effect persisted in all tasks and was strongest when stimuli were embedded in intermediate noise. In the noisy conditions, native phonotactics evoked overall enhanced left temporal activation. In contrast, the frontal learning effects were present only in conditions requiring overt repetition and were more pronounced for the foreign language. The results indicate a functional dissociation between temporal and frontal activations in learning new phonological word forms: the left superior temporal responses reflect activation of newly-established word-form representations, also during degraded sensory input, whereas the frontal premotor effects are related to planning for articulation and are not preserved in noise. PMID:25961571

  4. Distinct effects of memory retrieval and articulatory preparation when learning and accessing new word forms.

    PubMed

    Nora, Anni; Renvall, Hanna; Kim, Jeong-Young; Service, Elisabet; Salmelin, Riitta

    2015-01-01

    Temporal and frontal activations have been implicated in learning of novel word forms, but their specific roles remain poorly understood. The present magnetoencephalography (MEG) study examines the roles of these areas in processing newly-established word form representations. The cortical effects related to acquiring new phonological word forms during incidental learning were localized. Participants listened to and repeated back new word form stimuli that adhered to native phonology (Finnish pseudowords) or were foreign (Korean words), with a subset of the stimuli recurring four times. Subsequently, a modified 1-back task and a recognition task addressed whether the activations modulated by learning were related to planning for overt articulation, while parametrically added noise probed reliance on developing memory representations during effortful perception. Learning resulted in decreased left superior temporal and increased bilateral frontal premotor activation for familiar compared to new items. The left temporal learning effect persisted in all tasks and was strongest when stimuli were embedded in intermediate noise. In the noisy conditions, native phonotactics evoked overall enhanced left temporal activation. In contrast, the frontal learning effects were present only in conditions requiring overt repetition and were more pronounced for the foreign language. The results indicate a functional dissociation between temporal and frontal activations in learning new phonological word forms: the left superior temporal responses reflect activation of newly-established word-form representations, also during degraded sensory input, whereas the frontal premotor effects are related to planning for articulation and are not preserved in noise.

  5. A bacterial tyrosine phosphatase inhibits plant pattern recognition receptor activation

    USDA-ARS?s Scientific Manuscript database

    Perception of pathogen-associated molecular patterns (PAMPs) by surface-localised pattern-recognition receptors (PRRs) is a key component of plant innate immunity. Most known plant PRRs are receptor kinases and initiation of PAMP-triggered immunity (PTI) signalling requires phosphorylation of the PR...

  6. Distributed Recognition of Natural Songs by European Starlings

    ERIC Educational Resources Information Center

    Knudsen, Daniel; Thompson, Jason V.; Gentner, Timothy Q.

    2010-01-01

    Individual vocal recognition behaviors in songbirds provide an excellent framework for the investigation of comparative psychological and neurobiological mechanisms that support the perception and cognition of complex acoustic communication signals. To this end, the complex songs of European starlings have been studied extensively. Yet, several…

  7. Hazard recognition in mining: A psychological perspective. Information circular/1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perdue, C.W.; Kowalski, K.M.; Barrett, E.A.

    1995-07-01

    This U.S. Bureau of Mines report considers, from a psychological perspective, the perceptual process by which miners recognize and respond to mining hazards. It proposes that if the hazard recognition skills of miners can be improved, mining accidents may be reduced to a significant degree. Prior studies of hazard perception in mining are considered, as are relevant studies from investigations of military target identification, pilot and gunnery officer training, transportation safety, automobile operator behavior, as well as research into sensory functioning and visual information processing. A general model of hazard perception is introduced, and selected concepts from the psychology of perception that are applicable to the detection of mining hazards are reviewed. Hazard recognition is discussed as a function of the perceptual cues available to the miner as well as the cognitive resources and strategies employed by the miner. The development of expertise in responding to hazards is related to individual differences in the experience, aptitude, and personality of the worker. Potential applications to miner safety and training are presented.

  8. Clinical evaluation of music perception, appraisal and experience in cochlear implant users.

    PubMed

    Drennan, Ward R; Oleson, Jacob J; Gfeller, Kate; Crosson, Jillian; Driscoll, Virginia D; Won, Jong Ho; Anderson, Elizabeth S; Rubinstein, Jay T

    2015-02-01

    The objectives were to evaluate the relationships among music perception, appraisal, and experience in cochlear implant users in multiple clinical settings and to examine the viability of two assessments designed for clinical use. Background questionnaires (IMBQ) were administered by audiologists in 14 clinics in the United States and Canada. The CAMP included tests of pitch-direction discrimination, and melody and timbre recognition. The IMBQ queried users on prior musical involvement, music listening habits pre and post implant, and music appraisals. One-hundred forty-five users of Advanced Bionics and Cochlear Ltd cochlear implants. Performance on pitch direction discrimination, melody recognition, and timbre recognition tests were consistent with previous studies with smaller cohorts, as well as with more extensive protocols conducted in other centers. Relationships between perceptual accuracy and music enjoyment were weak, suggesting that perception and appraisal are relatively independent for CI users. Perceptual abilities as measured by the CAMP had little to no relationship with music appraisals and little relationship with musical experience. The CAMP and IMBQ are feasible for routine clinical use, providing results consistent with previous thorough laboratory-based investigations.

  9. Identifying and detecting facial expressions of emotion in peripheral vision.

    PubMed

    Smith, Fraser W; Rossit, Stephanie

    2018-01-01

    Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprised being the best recognized expressions in peripheral vision. In detection however, while happiness and surprised are still well detected, fear is also a well detected expression. We show that fear is a better detected than recognized expression. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.

  10. Identifying and detecting facial expressions of emotion in peripheral vision

    PubMed Central

    Rossit, Stephanie

    2018-01-01

    Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprised being the best recognized expressions in peripheral vision. In detection however, while happiness and surprised are still well detected, fear is also a well detected expression. We show that fear is a better detected than recognized expression. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus. PMID:29847562

  11. Black Ink and Red Ink (BIRI) Testing: A Testing Method to Evaluate Both Recall and Recognition Learning in Accelerated Adult-Learning Courses

    ERIC Educational Resources Information Center

    Rodgers, Joseph Lee; Rodgers, Jacci L.

    2011-01-01

    We propose, develop, and evaluate the black ink-red ink (BIRI) method of testing. This approach uses two different methods within the same test administration setting, one that matches recognition learning and the other that matches recall learning. Students purposively define their own tradeoff between the two approaches. Evaluation of the method…

  12. Analysis of differences in exercise recognition by constraints on physical activity of hospitalized cancer patients based on their medical history.

    PubMed

    Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk

    2018-04-01

    The purpose of this study is to analyze differences among hospitalized cancer patients in their perception of exercise and in constraints on physical activity, based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way ANOVA with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/socio-cultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide information necessary to create a patient-centered healthcare service system by analyzing exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.

  13. Schematic Influences on Category Learning and Recognition Memory

    ERIC Educational Resources Information Center

    Sakamoto, Yasuaki; Love, Bradley C.

    2004-01-01

    The results from 3 category learning experiments suggest that items are better remembered when they violate a salient knowledge structure such as a rule. The more salient the knowledge structure, the stronger the memory for deviant items. The effect of learning errors on subsequent recognition appears to be mediated through the imposed knowledge…

  14. Spaced Learning Enhances Subsequent Recognition Memory by Reducing Neural Repetition Suppression

    ERIC Educational Resources Information Center

    Xue, Gui; Mei, Leilei; Chen, Chuansheng; Lu, Zhong-Lin; Poldrack, Russell; Dong, Qi

    2011-01-01

    Spaced learning usually leads to better recognition memory as compared with massed learning, yet the underlying neural mechanisms remain elusive. One open question is whether the spacing effect is achieved by reducing neural repetition suppression. In this fMRI study, participants were scanned while intentionally memorizing 120 novel faces, half…

  15. Learning and Recognition in Health and Care Work: An Inter-Subjective Perspective

    ERIC Educational Resources Information Center

    Liveng, Anne

    2010-01-01

    Purpose: The purpose of this paper is to discuss the role of recognition in learning processes among female nurses, social and health care assistants and occupational therapists working with people with dementia and other age-related illnesses. Design/methodology/approach: The paper highlights the need to experience recognizing learning spaces…

  16. Achieving Our Potential: An Action Plan for Prior Learning Assessment and Recognition (PLAR) in Canada

    ERIC Educational Resources Information Center

    Morrissey, Mary; Myers, Douglas; Belanger, Paul; Robitaille, Magali; Davison, Phil; Van Kleef, Joy; Williams, Rick

    2008-01-01

    This comprehensive publication assesses the status of prior learning assessment and recognition (PLAR) across Canada and offers insights and recommendations into the processes necessary for employers, post-secondary institutions and government to recognize and value experiential and informal learning. Acknowledging economic trends in Canada's job…

  17. Test-Enhanced Learning of Natural Concepts: Effects on Recognition Memory, Classification, and Metacognition

    ERIC Educational Resources Information Center

    Jacoby, Larry L.; Wahlheim, Christopher N.; Coane, Jennifer H.

    2010-01-01

    Three experiments examined testing effects on learning of natural concepts and metacognitive assessments of such learning. Results revealed that testing enhanced recognition memory and classification accuracy for studied and novel exemplars of bird families on immediate and delayed tests. These effects depended on the balance of study and test…

  18. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition

    PubMed Central

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-01-01

    In the long term, performance degradation in pattern recognition-based myoelectric control is caused by a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824

  19. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition.

    PubMed

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-06-13

    In the long term, performance degradation in pattern recognition-based myoelectric control is caused by a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least squares support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).
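
    The paper's PAC builds on an incremental LS-SVC; as a generic stand-in for the unsupervised-adaptation idea, the sketch below uses a simple nearest-centroid classifier that updates its centroids from its own confident predictions, so the decision boundary tracks slow signal drift without new labels. The class names, thresholds, and drift model are illustrative assumptions, not the paper's method:

```python
import math

class AdaptiveCentroid:
    """Nearest-centroid gesture classifier with unsupervised adaptation:
    each confident self-labelled sample nudges its predicted centroid."""

    def __init__(self, centroids, rate=0.05, margin=0.2):
        self.centroids = {k: list(v) for k, v in centroids.items()}
        self.rate = rate      # centroid adaptation step
        self.margin = margin  # required gap between best and runner-up

    def predict(self, x, adapt=True):
        d = sorted((math.dist(x, c), k) for k, c in self.centroids.items())
        label = d[0][1]
        confident = (d[1][0] - d[0][0]) > self.margin
        if adapt and confident:
            c = self.centroids[label]
            for i, xi in enumerate(x):
                c[i] += self.rate * (xi - c[i])  # drift toward the sample
        return label

clf = AdaptiveCentroid({"rest": (0.0, 0.0), "grip": (1.0, 1.0)})
# Simulated electrode drift: 'grip' features slowly migrate upward.
for t in range(200):
    clf.predict((1.0 + 0.01 * t, 1.0 + 0.01 * t))
print(clf.predict((3.0, 3.0), adapt=False))  # -> grip
```

    The confidence margin plays the role of the paper's particle selection: only samples the current model is sure about are allowed to retrain it, which limits both error accumulation and per-update cost.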

  20. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    PubMed

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  1. Multidisciplinary Perspectives on Military Deception

    DTIC Science & Technology

    1980-05-01

    Bruner, Jerome S. and Postman, Leo. "On the Perception of Incongruity: A Paradigm," in Perception and Personality: A Symposium, eds. Jerome S. Bruner and David Krech. New York: Greenwood Press, 1968. Bruner, Jerome S. and Potter, Mary C. "Interference in Visual Recognition," Science, 144 (1964), 424-425

  2. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524

  3. Adult Learners' Learning Environment Perceptions and Satisfaction in Formal Education--Case Study of Four East-European Countries

    ERIC Educational Resources Information Center

    Radovan, Marko; Makovec, Danijela

    2015-01-01

    The purpose of this paper is to explore attitudes towards learning and perceptions of the learning environment. Our theoretical examination is based on the social-cognitive theory of motivation and research that emphasizes the connections between an individual's perceptions of the learning environment and his/her motivation, interest, attitudes…

  4. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  5. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  6. Implementing Collaborative Learning in Prelicensure Nursing Curricula: Student Perceptions and Learning Outcomes.

    PubMed

    Schoening, Anne M; Selde, M Susan; Goodman, Joely T; Tow, Joyce C; Selig, Cindy L; Wichman, Chris; Cosimano, Amy; Galt, Kimberly A

    2015-01-01

    This study evaluated learning outcomes and student perceptions of collaborative learning in an undergraduate nursing program. Participants in this 3-phase action research study included students enrolled in a traditional and an accelerated nursing program. The number of students who passed the unit examination was not significantly different between the 3 phases. Students had positive and negative perceptions about the use of collaborative learning.

  7. Dissociation of rapid response learning and facilitation in perceptual and conceptual networks of person recognition.

    PubMed

    Valt, Christian; Klein, Christoph; Boehm, Stephan G

    2015-08-01

    Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-held memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advance general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming. © 2014 The British Psychological Society.

  8. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  9. An exploratory study about meaningful work in acute care nursing.

    PubMed

    Pavlish, Carol; Hunt, Roberta

    2012-01-01

    To develop deeper understandings about nurses' perceptions of meaningful work and the contextual factors that impact finding meaning in work. Much has been written about nurses' job satisfaction and the impact on quality of health care. However, scant qualitative evidence exists regarding nurses' perceptions of meaningful work and how factors in the work environment influence their perceptions. The literature reveals links among work satisfaction, retention, quality of care, and meaningfulness in work. Using a narrative design, researchers interviewed 13 public health nurses and 13 acute care nurses. Categorical-content analysis with Atlas.ti data management software was conducted separately for each group of nurses. This article reports results for acute care nurses. Twenty-four stories of meaningful moments were analyzed and categorized. Three primary themes of meaningful work emerged: connections, contributions, and recognition. Participants described learning-focused environment, teamwork, constructive management, and time with patients as facilitators of meaningfulness and task-focused environment, stressful relationships, and divisive management as barriers. Meaningful nursing roles were advocate, catalyst and guide, and caring presence. Nurse administrators are the key to improving quality of care by nurturing opportunities for nurses to find meaning and satisfaction in their work. Study findings provide nurse leaders with new avenues for improving work environments and job satisfaction to potentially enhance healthcare outcomes. © 2012 Wiley Periodicals, Inc.

  10. Benefits for Voice Learning Caused by Concurrent Faces Develop over Time.

    PubMed

    Zäske, Romi; Mühl, Constanze; Schweinberger, Stefan R

    2015-01-01

    Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., "face-overshadowing". In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

  11. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not rule out an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. 
Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  12. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of an augmented reality (AR) maintenance guiding system. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the gesture recognition process is divided into three stages: gesture segmentation, gesture feature modeling, and gesture classification. In the segmentation stage, to address the misrecognition of skin-like regions, a segmentation algorithm combining a background model and skin color is adopted to exclude those regions. In the feature modeling stage, a range of characteristic features is analyzed and extracted, including structural characteristics, Hu invariant moments, and Fourier descriptors. In the classification stage, a classifier based on the Support Vector Machine (SVM) is introduced into the AR maintenance guiding process. SVM is a learning method grounded in statistical learning theory; it has a solid theoretical foundation and strong generalization ability, is widely applied in machine learning, and offers particular advantages for small-sample, non-linear, high-dimensional pattern recognition. Gesture recognition in the AR maintenance guiding system is realized by the SVM once all characteristic features have been assembled into feature vectors. Experimental results from simulated number-gesture recognition and from application in the AR maintenance guiding system show that an improved SVM greatly enhances the real-time performance and robustness of gesture recognition.
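
    The classification stage described above can be sketched as follows, assuming upstream feature extraction (e.g., Hu moments and Fourier descriptors) has already produced fixed-length vectors. The synthetic data, class counts, and SVM hyperparameters below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: an RBF-kernel SVM over pre-extracted gesture feature
# vectors. Feature extraction itself is assumed to happen upstream;
# the synthetic features here merely stand in for real gesture data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Three gesture classes, 20 training samples each, 9-dimensional features.
X_train = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 9)) for c in range(3)])
y_train = np.repeat([0, 1, 2], 20)

# RBF kernel: suited to small-sample, non-linear, high-dimensional problems.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

# Classify a new gesture resembling class 1.
probe = rng.normal(loc=1, scale=0.3, size=(1, 9))
print(clf.predict(probe))
```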

  13. Speech perception as an active cognitive process

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive resources. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated, either through augmentation or therapy. 
PMID:24672438

  14. Implicit and Explicit Contributions to Object Recognition: Evidence from Rapid Perceptual Learning

    PubMed Central

    Hassler, Uwe; Friese, Uwe; Gruber, Thomas

    2012-01-01

    The present study investigated implicit and explicit recognition processes of rapidly perceptually learned objects by means of steady-state visual evoked potentials (SSVEP). Participants were initially exposed to object pictures within an incidental learning task (living/non-living categorization). Subsequently, degraded versions of some of these learned pictures were presented together with degraded versions of unlearned pictures and participants had to judge, whether they recognized an object or not. During this test phase, stimuli were presented at 15 Hz eliciting an SSVEP at the same frequency. Source localizations of SSVEP effects revealed for implicit and explicit processes overlapping activations in orbito-frontal and temporal regions. Correlates of explicit object recognition were additionally found in the superior parietal lobe. These findings are discussed to reflect facilitation of object-specific processing areas within the temporal lobe by an orbito-frontal top-down signal as proposed by bi-directional accounts of object recognition. PMID:23056558
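
    Frequency tagging of this kind is typically quantified as spectral amplitude at the stimulation frequency. A minimal sketch on simulated data follows; the sampling rate, epoch length, and signal amplitude are assumed for illustration and are not taken from the study:

```python
# Hedged sketch: estimating SSVEP amplitude as the FFT magnitude at the
# tagging frequency (15 Hz, as in the abstract) of a simulated EEG epoch.
import numpy as np

fs = 600.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # 2-second epoch -> 0.5 Hz frequency resolution
rng = np.random.default_rng(1)
# Simulated signal: a 15 Hz steady-state response buried in white noise.
eeg = 2.0 * np.sin(2 * np.pi * 15 * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# Factor 2 converts one-sided FFT magnitude back to sine amplitude.
ssvep_amplitude = 2 * spectrum[np.argmin(np.abs(freqs - 15.0))]
print(round(ssvep_amplitude, 2))
```

Because the epoch length makes 15 Hz an exact FFT bin, the estimate recovers the simulated amplitude of 2.0 up to the noise contribution.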

  15. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
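
    The prediction-error updating of familiarity can be illustrated with a simple delta rule, in which each exposure reduces the error between predicted and experienced familiarity. This is a generic Rescorla-Wagner-style sketch, not the paper's fitted model; the learning rate and outcome coding are arbitrary assumptions:

```python
# Hedged sketch: delta-rule updating of stimulus familiarity. On each
# exposure, the prediction error (outcome minus current familiarity)
# drives the update, so familiarity rises toward the outcome asymptote.
def update_familiarity(familiarity, outcome, learning_rate=0.3):
    prediction_error = outcome - familiarity
    return familiarity + learning_rate * prediction_error

familiarity = 0.0            # stimulus starts fully unfamiliar
trajectory = []
for trial in range(10):      # repeated exposures to the same face
    familiarity = update_familiarity(familiarity, outcome=1.0)
    trajectory.append(round(familiarity, 3))
print(trajectory)
```

The trajectory shows the characteristic negatively accelerated learning curve: large early prediction errors produce large updates, which shrink as the face becomes familiar.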

  16. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    ERIC Educational Resources Information Center

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  17. Modeling Human Visual Perception for Target Detection in Military Simulations

    DTIC Science & Technology

    2009-06-01

    incorrectly, is a subject for future research. Possibly, one could exploit the Recognition-by-Components theory of Biederman (1987) and decompose the...Psychophysics, 55, 485-496. Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147

  18. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts.

    PubMed

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2016-06-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.
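
    The learning paradigm can be caricatured as a feedback-driven associative learner that chooses among a limited candidate set on each trial and strengthens or prunes word-referent links based on the feedback. This toy simulation is purely illustrative (the words, referents, trial counts, and update rule are all invented) and is not the study's task or analysis:

```python
# Hedged sketch of ambiguous word-referent learning with online feedback:
# each trial shows one novel word with several candidate referents; the
# learner guesses, and feedback adjusts the association strengths.
import random

def learn_vocabulary(mapping, n_choices=3, n_trials=400, seed=0):
    rng = random.Random(seed)
    words, referents = list(mapping), list(mapping.values())
    strength = {(w, r): 0.0 for w in words for r in referents}
    for _ in range(n_trials):
        word = rng.choice(words)
        # the candidate set always contains the true referent plus distractors
        distractors = rng.sample([r for r in referents if r != mapping[word]],
                                 n_choices - 1)
        candidates = distractors + [mapping[word]]
        # pick the strongest association; ties are broken randomly
        guess = max(candidates, key=lambda r: (strength[(word, r)], rng.random()))
        correct = guess == mapping[word]
        strength[(word, guess)] += 1.0 if correct else -1.0   # online feedback
    # final lexicon: the highest-strength referent for each word
    return {w: max(referents, key=lambda r: strength[(w, r)]) for w in words}

true_map = {"bika": "cup", "dofu": "key", "mipa": "shoe", "tolu": "lamp"}
print(learn_vocabulary(true_map) == true_map)
```

Because only correct guesses ever gain positive strength, the learner converges on the true mappings despite the referential ambiguity on every individual trial.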

  19. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  20. Tracking the truth: the effect of face familiarity on eye fixations during deception.

    PubMed

    Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert

    2017-05-01

    In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.

  1. Multilevel Predictors of Differing Perceptions of Assessment for Learning Practices between Teachers and Students

    ERIC Educational Resources Information Center

    Pat-El, Ron Jonathan; Tillema, Harm; Segers, Mien; Vedder, Paul

    2015-01-01

    Assessment for Learning (AfL), as a way to promote learning, requires a "match" or a shared focus between student and teacher to be effective. But students and teachers may differ in their perceptions of the purpose and process of classroom assessment meant to promote learning. Perceptions regarding AfL practices in their classroom were…

  2. Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence.

    PubMed

    Longcamp, Marieke; Boucard, Céline; Gilhodes, Jean-Claude; Anton, Jean-Luc; Roth, Muriel; Nazarian, Bruno; Velay, Jean-Luc

    2008-05-01

    Fast and accurate visual recognition of single characters is crucial for efficient reading. We explored the possible contribution of writing memory to character recognition processes. We evaluated the ability of adults to discriminate new characters from their mirror images after being taught how to produce the characters either by traditional pen-and-paper writing or with a computer keyboard. After training, we found stronger and longer-lasting (several weeks) facilitation in recognizing the orientation of characters that had been written by hand compared to those typed. Functional magnetic resonance imaging recordings indicated that the response mode during learning is associated with distinct pathways during recognition of graphic shapes. Greater activity related to handwriting learning and normal letter identification was observed in several brain regions known to be involved in the execution, imagery, and observation of actions, in particular, the left Broca's area and bilateral inferior parietal lobules. Taken together, these results provide strong arguments in favor of the view that the specific movements memorized when learning how to write participate in the visual recognition of graphic shapes and letters.

  3. Neuropeptide Trefoil factor 3 improves learning and retention of novel object recognition memory in mice.

    PubMed

    Shi, Hai-Shui; Yin, Xi; Song, Li; Guo, Qing-Jun; Luo, Xiang-Heng

    2012-02-01

    Accumulating evidence has implicated neuropeptides in modulating recognition, learning, and memory. However, to date, no study has investigated the effects of neuropeptide Trefoil factor 3 (TFF3) on the process of learning and memory. In the present study, we evaluated the acute effects of TFF3 administration (0.1 and 0.5 mg/kg, i.p.) on the acquisition and retention of object recognition memory in mice. We found that TFF3 administration significantly enhanced both short-term and long-term memory during the retention test, conducted 90 min and 24 h after training, respectively. Remarkably, acute TFF3 administration transformed a learning event that would not normally result in long-term memory into an event retained over the long term, and produced no effect on locomotor activity in mice. In conclusion, the present results indicate an important role for TFF3 in improving object recognition memory and preserving it over a longer period, suggesting a potential therapeutic application for diseases involving recognition and memory impairment. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. A Follow-Up Study on Music and Lexical Tone Perception in Adult Mandarin-Speaking Cochlear Implant Users.

    PubMed

    Gu, Xin; Liu, Bo; Liu, Ziye; Qi, Beier; Wang, Shuo; Dong, Ruijuan; Chen, Xueqing; Zhou, Qian

    2017-12-01

    The aim was to evaluate the development of music and lexical tone perception in Mandarin-speaking adult cochlear implant (CI) users over a period of 1 year. Prospective patient series. Tertiary hospital and research institute. Twenty-five adult CI users, aged 19 to 75 years, participated in a year-long follow-up evaluation. Forty normal-hearing adult subjects also participated as a control group to provide the normal value range. The Musical Sounds in Cochlear Implants (Mu.S.I.C.) test battery was administered to evaluate music perception ability. The Mandarin Tone Identification in Noise Test (M-TINT) was used to assess lexical tone recognition. The tests for CI users were completed at 1, 3, 6, and 12 months after CI switch-on. Results from the music and tone perception tests were analyzed quantitatively and statistically. Performance on both music perception and tone recognition demonstrated an overall improvement during the entire 1-year follow-up. The increasing trends were most evident in the early period, especially in the first 6 months after switch-on. There was a significant improvement in melody discrimination (p < 0.01), timbre identification (p < 0.001), tone recognition in quiet (p < 0.0001), and in noise (p < 0.0001). Adult Mandarin-speaking CI users showed increasingly improved performance on music and tone perception during the 1-year follow-up. The improvement was most prominent in the first 6 months of CI use. It is essential to strengthen rehabilitation training within the first 6 months.

  5. Smartphone-Based Patients' Activity Recognition by Using a Self-Learning Scheme for Medical Monitoring.

    PubMed

    Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong

    2016-06-01

    Smartphone-based activity recognition has recently received remarkable attention in various applications of mobile health such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After real-time data collection through this smartphone, we preprocess the data using a coordinate system transformation to eliminate the influence of phone orientation. A set of robust and effective features is then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities for which there is no a priori knowledge in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in the training pools that match well with the unpredictable activity data. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples, combined with the newly acquired category labels, are then merged into the training dataset to reinforce the recognition ability of the self-learning model. In experiments, we evaluate our scheme using data collected from two postoperative patient volunteers, including six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works well in most cases. When several types of unseen activities occur without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
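
    The self-learning loop described above (classify against known categories, buffer unmatched samples as new clusters, merge the new categories back into the training pool) can be sketched roughly as follows. The paper publishes no code, so everything here is an illustrative assumption, not the authors' implementation: the nearest-centroid classifier, the fixed distance threshold, and the `new_activity_*` label names are all hypothetical.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class SelfLearningRecognizer:
    """Nearest-centroid classifier that assembles unrecognized samples
    into new activity categories (a sketch of the general idea only)."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold  # max distance to accept a known label
        self.centroids = {}         # label -> feature centroid
        self.counts = {}            # label -> number of samples seen
        self._next_new = 0

    def fit(self, samples, labels):
        for x, y in zip(samples, labels):
            self._update(y, x)

    def _update(self, label, x):
        # incremental mean update of the category centroid
        n = self.counts.get(label, 0)
        c = self.centroids.get(label, x)
        self.centroids[label] = tuple((ci * n + xi) / (n + 1)
                                      for ci, xi in zip(c, x))
        self.counts[label] = n + 1

    def predict(self, x):
        """Return the nearest known label, or open a new category when
        no training centroid matches within the threshold."""
        if self.centroids:
            label = min(self.centroids, key=lambda l: dist(self.centroids[l], x))
            if dist(self.centroids[label], x) <= self.threshold:
                self._update(label, x)  # reinforce the matched category
                return label
        # no match: create a new category and merge it into the pool
        new_label = f"new_activity_{self._next_new}"
        self._next_new += 1
        self._update(new_label, x)
        return new_label
```

    A real system would use richer features, a stronger classifier, and a proper clustering pass over the novelty buffer; this sketch only shows how unseen activities can acquire new labels that are then recognized on later samples.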

  6. The Effect of Automatic Speech Recognition Eyespeak Software on Iraqi Students' English Pronunciation: A Pilot Study

    ERIC Educational Resources Information Center

    Sidgi, Lina Fathi Sidig; Shaari, Ahmad Jelani

    2017-01-01

    Technology such as computer-assisted language learning (CALL) is used in teaching and learning in foreign language classrooms, where it is most needed. One promising emerging technology that supports language learning is automatic speech recognition (ASR). Integrating such technology, especially in the instruction of pronunciation…

  7. Validity of Assessment and Recognition of Non-Formal and Informal Learning Achievements in Higher Education

    ERIC Educational Resources Information Center

    Kaminskiene, Lina; Stasiunaitiene, Egle

    2013-01-01

    The article identifies the validity of assessment of non-formal and informal learning achievements (NILA) as one of the key factors for encouraging further development of the process of assessing and recognising non-formal and informal learning achievements in higher education. The authors analyse why the recognition of non-formal and informal…

  8. Adjusting the Fulcrum: How Prior Learning Is Recognized and Regarded in University Adult Education Contexts

    ERIC Educational Resources Information Center

    Kawalilak, Colleen; Wihak, Wihak

    2013-01-01

    Prior Learning Assessment and Recognition (PLAR) offers adults formal recognition for learning obtained through non-formal and informal means. The practice reflects both equity and economic development concerns (Keeton, 2000). In the field of Adult Education as a formal study, however, tensions exist between honouring the learner and honouring the…

  9. Undergraduate Students' Perceptions of Collaborative Learning in a Differential Equations Mathematics Course

    ERIC Educational Resources Information Center

    Hajra, Sayonita Ghosh; Das, Ujjaini

    2015-01-01

    This paper uses collaborative learning strategies to examine students' perceptions in a differential equations mathematics course. Students' perceptions were analyzed using three collaborative learning strategies including collaborative activity, group-quiz and online discussion. The study results show that students identified both strengths and…

  10. Knowledge About Sounds—Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields, and Layers in House Mice

    PubMed Central

    Geissler, Diana B.; Schmidt, H. Sabine; Ehret, Günter

    2016-01-01

    Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition while naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females that differ in their knowledge of, and motivation to respond to, acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects, in AI, AAF, and UF, the integration of sound properties with animal state-dependent factors; in the higher-order field AII, the news value of a given sound in the behavioral context; and in the higher-order field DP, the level of maternal motivation and, by a left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different outputs of AC fields in the process of perceiving, recognizing, and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest a differentiation between brains hormonally primed to know (mothers) and brains that acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition. PMID:27013959

  11. Transformations in the Recognition of Visual Forms

    ERIC Educational Resources Information Center

    Charness, Neil; Bregman, Albert S.

    1973-01-01

    In a study which required college students to learn to recognize four flexible plastic shapes photographed on different backgrounds from different angles, the importance of a context-rich environment for the learning and recognition of visual patterns was illustrated. (Author)

  12. A framework for recognition of prior learning within a Postgraduate Diploma of Nursing Management in South Africa.

    PubMed

    Jooste, Karien; Jasper, Melanie

    2010-09-01

    The present study focuses on the development of an initial framework to guide educators in nursing management in designing a portfolio for the recognition of prior learning for accreditation of competencies within a postgraduate diploma in South Africa. In South Africa, there is a unique educational need, arising from the legacy of apartheid and previous political regimes, to facilitate educational development in groups previously unable to access higher education. Awareness of the need for continuous professional development in nursing management practice and recognition of prior learning in the educational environment has presented the possibility of using one means to accomplish both aims. Although the content of the present study is pertinent to staff development of nurse managers, it is primarily written for nurse educators in the field of nursing management. The findings identify focus areas to be addressed in a recognition of prior learning portfolio to comply with the programme-specific outcomes of Nursing Service Management. Further work is needed to refine these focus areas into criteria that specify the level of performance required to demonstrate achievement. CONCLUSION AND IMPLICATIONS FOR NURSE MANAGERS: Managers need to facilitate continuous professional development through portfolio compilation, which acknowledges the learning opportunities within the workplace and can be used as recognition of prior learning. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.

  13. How do robots take two parts apart

    NASA Technical Reports Server (NTRS)

    Bajcsy, Ruzena K.; Tsikos, Constantine J.

    1989-01-01

    This research is a natural progression of efforts that began with the introduction of a new research paradigm in machine perception, called Active Perception. There it was stated that Active Perception is a problem of intelligent control strategies applied to data acquisition processes, which depend on the current state of data interpretation, including recognition. The disassembly/assembly problem is treated as an Active Perception problem, and a method for autonomous disassembly based on this framework is presented.

  14. Project PAVE (Personality And Vision Experimentation): role of personal and interpersonal resilience in the perception of emotional facial expression

    PubMed Central

    Tanzer, Michal; Shahar, Golan; Avidan, Galia

    2014-01-01

    The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret one’s facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals’ personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model to psychotherapy are discussed. PMID:25165439

  15. Self perception of empathy in schizophrenia: emotion recognition, insight, and symptoms predict degree of self and interviewer agreement.

    PubMed

    Lysaker, Paul H; Hasson-Ohayon, Ilanit; Kravetz, Shlomo; Kent, Jerillyn S; Roe, David

    2013-04-30

    Many with schizophrenia have been found to experience difficulties recognizing a range of their own mental states including memories and emotions. While there is some evidence that the self perception of empathy in schizophrenia is often at odds with objective observations, little is known about the correlates of rates of concordance between self and rater assessments of empathy for this group. To explore this issue we gathered self and rater assessments of empathy in addition to assessments of emotion recognition using the Bell Lysaker Emotion Recognition Task, insight using the Scale to Assess Unawareness of Mental Disorder, and symptoms using the Positive and Negative Syndrome Scale from 91 adults diagnosed with schizophrenia spectrum disorders. Results revealed that participants with better emotion recognition, better insight, fewer positive symptoms and fewer depressive symptoms produced self ratings of empathy which were more strongly correlated with assessments of empathy performed by raters than participants with greater deficits in these domains. Results suggest that deficits in emotion recognition along with poor insight and higher levels of positive and depressive symptoms may affect the degree of agreement between self and rater assessments of empathy in schizophrenia. Published by Elsevier Ireland Ltd.
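
    The self-rater agreement reported above is, at bottom, a correlation between two rating series. As a purely illustrative aside (the ratings below are invented, not data from the study), Pearson's r can be computed in a few lines:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical empathy ratings (1-5 scale) for five participants
self_ratings = [4, 3, 5, 2, 4]
rater_ratings = [4, 2, 5, 2, 3]
print(round(pearson_r(self_ratings, rater_ratings), 2))  # → 0.91
```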

  16. The effectiveness of music as a mnemonic device on recognition memory for people with multiple sclerosis.

    PubMed

    Moore, Kimberly Sena; Peterson, David A; O'Shea, Geoffrey; McIntosh, Gerald C; Thaut, Michael H

    2008-01-01

    Research shows that people with multiple sclerosis exhibit learning and memory difficulties and that music can be used successfully as a mnemonic device to aid learning and memory. However, there is currently no research investigating the effectiveness of music mnemonics as a compensatory learning strategy for people with multiple sclerosis. Participants with clinically definite multiple sclerosis (N = 38) were given a verbal learning and memory test. Results from a recognition memory task were analyzed that compared learning through music (n = 20) versus learning through speech (n = 18). Preliminary baseline neuropsychological data were collected that measured executive functioning skills, learning and memory abilities, sustained attention, and level of disability. An independent-samples t test showed no significant difference between groups on baseline neuropsychological functioning or on recognition task measures. Correlation analyses suggest that music mnemonics may facilitate learning for people who are less impaired by the disease. Implications for future research are discussed.

  17. Veterinary students' perceptions of their learning environment as measured by the Dundee Ready Education Environment Measure.

    PubMed

    Pelzer, Jacquelyn M; Hodgson, Jennifer L; Werre, Stephen R

    2014-03-24

    The Dundee Ready Education Environment Measure (DREEM) has been widely used to evaluate the learning environment within health sciences education; however, this tool has not been applied in veterinary medical education. The aim of this study was to evaluate the reliability and validity of the DREEM tool in a veterinary medical program and to determine veterinary students' perceptions of their learning environment. The DREEM is a survey tool which quantitatively measures students' perceptions of their learning environment. The survey consists of 50 items, each scored 0-4 on a Likert scale. The 50 items are subsequently analysed within five subscales related to students' perceptions of learning, faculty (teachers), academic atmosphere, and self-perceptions (academic and social). An overall score is obtained by summing the mean score for each subscale, with an overall possible score of 200. All students in the program were asked to complete the DREEM. Means and standard deviations were calculated for the 50 items, the five subscale scores and the overall score. Cronbach's alpha was determined for the five subscales and overall score to evaluate reliability. Confirmatory factor analysis was used to evaluate construct validity. A total of 224 responses (53%) were received. The Cronbach's alpha for the overall score was 0.93, and for the five subscales: perceptions of learning 0.85, perceptions of faculty 0.79, perceptions of atmosphere 0.81, academic self-perceptions 0.68, and social self-perceptions 0.72. Construct validity was determined to be acceptable (p < 0.001) and all items contributed to the overall validity of the DREEM. The overall DREEM score was 128.9/200, which is a positive result based on the developers' descriptors and comparable to other health science education programs. Four individual items of concern were identified by students.
In this setting the DREEM was a reliable and valid tool for measuring veterinary students' perceptions of their learning environment. The four items identified as concerning originated from four of the five subscales, but all related to workload. Negative perceptions regarding workload are a common concern of students in health education programs. If not addressed, this perception may have an unfavourable impact on veterinary students' learning environment.
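
    The abstract spells out the instrument's arithmetic: 50 items scored 0-4, grouped into five subscales, an overall score out of 200, and Cronbach's alpha for reliability. That arithmetic can be sketched in a few lines of Python. This is an illustration only: the subscale-to-item mapping below is a made-up placeholder, not the real DREEM scoring key, and one common convention (summing item scores within each subscale, then summing the subscales) is assumed.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(item_scores[0])
    item_vars = [pvariance([row[i] for row in item_scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def dreem_overall(responses, subscale_items):
    """Overall score: sum of subscale scores, each the sum of its 0-4
    Likert items (subscale_items maps subscale name -> item indices)."""
    return sum(sum(responses[i] for i in items)
               for items in subscale_items.values())

# Toy example with 4 items in 2 placeholder subscales (not the DREEM key)
key = {"learning": [0, 1], "atmosphere": [2, 3]}
print(dreem_overall([4, 3, 2, 1], key))  # → 10
```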

  18. Not on the Face Alone: Perception of Contextualized Face Expressions in Huntington's Disease

    ERIC Educational Resources Information Center

    Aviezer, Hillel; Bentin, Shlomo; Hassin, Ran R.; Meschino, Wendy S.; Kennedy, Jeanne; Grewal, Sonya; Esmail, Sherali; Cohen, Sharon; Moscovitch, Morris

    2009-01-01

    Numerous studies have demonstrated that Huntington's disease mutation-carriers have deficient explicit recognition of isolated facial expressions. There are no studies, however, which have investigated the recognition of facial expressions embedded within an emotional body and scene context. Real life facial expressions are typically embedded in…

  19. Emotion Understanding in Children with ADHD

    ERIC Educational Resources Information Center

    Da Fonseca, David; Seguier, Valerie; Santos, Andreia; Poinso, Francois; Deruelle, Christine

    2009-01-01

    Several studies suggest that children with ADHD tend to perform worse than typically developing children on emotion recognition tasks. However, most of these studies have focused on the recognition of facial expression, while there is evidence that context plays a major role on emotion perception. This study aims at further investigating emotion…

  20. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  1. Evaluation of Intervention Reach on a Citywide Health Behavior Change Campaign: Cross-Sectional Study Results

    ERIC Educational Resources Information Center

    Shimazaki, Takashi; Takenaka, Koji

    2015-01-01

    Little is known about dissemination strategies that contribute to health information recognition. This study examined (a) health campaign exposure and awareness (slogan and logo recognition); (b) perceived communication channels; (c) differences between perceptions of researcher-developed and enhancement community health information materials; and…

  2. Cognitive Development and Reading Processes. Developmental Program Report Number 76.

    ERIC Educational Resources Information Center

    West, Richard F.

    In discussing the relationship between cognitive development (perception, pattern recognition, and memory) and reading processes, this paper especially emphasizes developmental factors. After an overview of some issues that bear on how written language is processed, the paper presents a discussion of pattern recognition, including general pattern…

  3. Brain regions and functional interactions supporting early word recognition in the face of input variability.

    PubMed

    Benavides-Varela, Silvia; Siugzdaite, Roma; Gómez, David Maximiliano; Macagno, Francesco; Cattarossi, Luigi; Mehler, Jacques

    2017-07-18

    Perception and cognition in infants have been traditionally investigated using habituation paradigms, assuming that babies' memories in laboratory contexts are best constructed after numerous repetitions of the very same stimulus in the absence of interference. A crucial, yet open, question regards how babies deal with stimuli experienced in a fashion similar to everyday learning situations-namely, in the presence of interfering stimuli. To address this question, we used functional near-infrared spectroscopy to test 40 healthy newborns on their ability to encode words presented in concomitance with other words. The results evidenced a habituation-like hemodynamic response during encoding in the left-frontal region, which was associated with a progressive decrement of the functional connections between this region and the left-temporal, right-temporal, and right-parietal regions. In a recognition test phase, a characteristic neural signature of recognition recruited first the right-frontal region and subsequently the right-parietal ones. Connections originating from the right-temporal regions to these areas emerged when newborns listened to the familiar word in the test phase. These findings suggest a neural specialization at birth characterized by the lateralization of memory functions: the interplay between temporal and left-frontal regions during encoding and between temporo-parietal and right-frontal regions during recognition of speech sounds. Most critically, the results show that newborns are capable of retaining the sound of specific words despite hearing other stimuli during encoding. Thus, habituation designs that include various items may be as effective for studying early memory as repeated presentation of a single word.

  4. Perception of resyllabification in French.

    PubMed

    Gaskell, M Gareth; Spinelli, Elsa; Meunier, Fanny

    2002-07-01

    In three experiments, we examined the effects of phonological resyllabification processes on the perception of French speech. Enchainment involves the resyllabification of a word-final consonant across a syllable boundary (e.g., in chaque avion, the /k/ crosses the syllable boundary to become syllable initial). Liaison involves a further process of realization of a latent consonant, alongside resyllabification (e.g., the /t/ in petit avion). If the syllable is a dominant unit of perception in French (Mehler, Dommergues, Frauenfelder, & Segui, 1981), these processes should cause problems for recognition of the following word. A cross-modal priming experiment showed no cost attached to either type of resyllabification in terms of reduced activation of the following word. Furthermore, word- and sequence-monitoring experiments again showed no cost and suggested that the recognition of vowel-initial words may be facilitated when they are preceded by a word that had undergone resyllabification through enchainment or liaison. We examine the sources of information that could underpin facilitation and propose a refinement of the syllable's role in the perception of French speech.

  5. Learner Behaviors and Perceptions of Autonomous Language Learning

    ERIC Educational Resources Information Center

    Bekleyen, Nilüfer; Selimoglu, Figen

    2016-01-01

    The purpose of the present study was to investigate the learners' behaviors and perceptions about autonomous language learning at the university level in Turkey. It attempts to reveal what type of perceptions learners held regarding teachers' and their own responsibilities in the language learning process. Their autonomous language learning…

  6. Employees' Perception toward the Dimension of Culture in Enhancing Organizational Learning

    ERIC Educational Resources Information Center

    Graham, Carroll M.; Nafukho, Fredrick Muyia

    2007-01-01

    Purpose: The purpose of this study is to determine employees' perception of the dimension of culture toward organizational learning readiness. The study also seeks to compare employees' work experience (longevity), work shifts and their perception toward the dimension of culture in enhancing organizational learning readiness.…

  7. EFL Teachers' Perception of University Students' Motivation and ESP Learning Achievement

    ERIC Educational Resources Information Center

    Dja'far, Veri Hardinansyah; Cahyono, Bambang Yudi; Bashtomi, Yazid

    2016-01-01

    This research aimed at examining Indonesian EFL Teachers' perception of students' motivation and English for Specific Purposes (ESP) learning achievement. It also explored the strategies applied by teachers based on their perception of students' motivation and ESP learning achievement. This research involved 204 students who took English for…

  8. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    PubMed

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Medical Student Perceptions of the Learning Environment in Medical School Change as Students Transition to Clinical Training in Undergraduate Medical School.

    PubMed

    Dunham, Lisette; Dekhtyar, Michael; Gruener, Gregory; CichoskiKelly, Eileen; Deitz, Jennifer; Elliott, Donna; Stuber, Margaret L; Skochelak, Susan E

    2017-01-01

    Phenomenon: The learning environment is the physical, social, and psychological context in which a student learns. A supportive learning environment contributes to student well-being and enhances student empathy, professionalism, and academic success, whereas an unsupportive learning environment may lead to burnout, exhaustion, and cynicism. Student perceptions of the medical school learning environment may change over time and be associated with students' year of training and may differ significantly depending on the student's gender or race/ethnicity. Understanding the changes in perceptions of the learning environment related to student characteristics and year of training could inform interventions that facilitate positive experiences in undergraduate medical education. The Medical School Learning Environment Survey (MSLES) was administered to 4,262 students who matriculated at one of 23 U.S. and Canadian medical schools in 2010 and 2011. Students completed the survey at the end of each year of medical school as part of a battery of surveys in the Learning Environment Study. A mixed-effects longitudinal model, t tests, Cohen's d effect size, and analysis of variance assessed the relationship between MSLES score, year of training, and demographic variables. After controlling for gender, race/ethnicity, and school, students reported worsening perceptions toward the medical school learning environment, with the worst perceptions in the 3rd year of medical school as students begin their clinical experiences, and some recovery in the 4th year after Match Day. The drop in MSLES scores associated with the transition to the clinical learning environment (-0.26 point drop in addition to yearly change, effect size = 0.52, p < .0001) is more than 3 times greater than the drop between the 1st and 2nd year (0.07 points, effect size = 0.14, p < .0001). The largest declines were from items related to work-life balance and informal student relationships. 
There was some, but not complete, recovery in perceptions of the medical school learning environment in the 4th year. Insights: Perceptions of the medical school learning environment worsen as students continue through medical school, with a stronger decline in perception scores as students transition to the clinical learning environment. Students reported the greatest drop in finding time for outside activities and students helping one another in the 3rd year. Perceptions differed based on gender and race/ethnicity. Future studies should investigate the specific features of medical schools that contribute most significantly to student perceptions of the medical school learning environment, both positive and negative, to pinpoint potential interventions and improvements.

  10. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.

    PubMed

    Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina

    2014-12-01

    In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned from a video showing their face and three others were learned in a matched control condition without a face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without a face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without a face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. 
Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Ultrafast learning in a hard-limited neural network pattern recognizer

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1996-03-01

    As we have published over the last five years, supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (as it should be for most practical pattern recognition applications), the connection matrix required to meet this mapping can be obtained noniteratively in one step. Generally, there exist infinitely many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns will become optimally robust in the recognition mode. The learning speed is very fast and close to real time because the learning process is noniterative and one-step. This paper reports the theoretical analysis and the design of a practical character recognition system for recognizing handwritten alphabetic characters. The experimental result is recorded in real time on an unedited videotape for demonstration purposes. This real-time recording shows that the recognition of untrained handwritten characters is invariant to size, location, orientation, and writing sequence, even though the training is done with standard size, standard orientation, central location, and standard writing sequence.
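
    The one-step idea described above can be illustrated with a hedged sketch: solving for a linear classifier's connection matrix in closed form (via the least-squares pseudoinverse) and then applying a hard-limiting (sign) activation. This is an illustrative analogue only, not the paper's PLI-based construction; the patterns, targets, and function names below are invented for the example.

```python
import numpy as np

# Hedged sketch: a one-step (noniterative) computation of a connection
# matrix for a hard-limited linear classifier. The least-squares
# pseudoinverse stands in for the paper's PLI-based solution.

def one_step_weights(X, T):
    """Solve for W minimizing ||X @ W - T||_F in a single step."""
    return np.linalg.pinv(X) @ T

# Toy example: 4 input patterns (rows, with a bias column of ones) and
# 2 output units with +/-1 targets.
X = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 1.],
              [0., 0., 1.]])
T = np.array([[ 1., -1.],
              [-1.,  1.],
              [ 1.,  1.],
              [-1., -1.]])

W = one_step_weights(X, T)        # no iteration, one matrix solve
outputs = np.sign(X @ W)          # hard-limited (sign) activation
```

For these toy patterns the targets lie in the column space of `X`, so the hard-limited outputs reproduce the training targets exactly; the speed advantage over iterative training comes from replacing an epoch loop with a single linear solve.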

  12. Normal voice processing after posterior superior temporal sulcus lesion.

    PubMed

    Jiahui, Guo; Garrido, Lúcia; Liu, Ran R; Susilo, Tirta; Barton, Jason J S; Duchaine, Bradley

    2017-10-01

    The right posterior superior temporal sulcus (pSTS) shows a strong response to voices, but the cognitive processes generating this response are unclear. One possibility is that this activity reflects basic voice processing. However, several fMRI and magnetoencephalography findings suggest instead that pSTS serves as an integrative hub that combines voice and face information. Here we investigate whether right pSTS contributes to basic voice processing by testing Faith, a patient whose right pSTS was resected, with eight behavioral tasks assessing voice identity perception and recognition, voice sex perception, and voice expression perception. Faith performed normally on all the tasks. Her normal performance indicates right pSTS is not necessary for intact voice recognition and suggests that pSTS activations to voices reflect higher-level processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Perceptions by medical students of their educational environment for obstetrics and gynaecology in metropolitan and rural teaching sites.

    PubMed

    Carmody, Dianne F; Jacques, Angela; Denz-Penhey, Harriet; Puddey, Ian; Newnham, John P

    2009-12-01

    Medical student education in Western Australia is expanding to secondary level metropolitan hospitals and rural sites to accommodate workforce demands and increasing medical student numbers. The aims were to determine whether students' perceptions of the teaching environment for obstetrics and gynaecology differ between tertiary hospitals, secondary level metropolitan hospitals, and rural sites, and whether students' perceptions of their learning environment are associated with improved academic performance. An evaluation was conducted of medical students' perceptions of their learning environment during an obstetrics and gynaecology program at a variety of sites across metropolitan and rural Western Australia. The evaluation was based on the Dundee Ready Education Environment Measure (DREEM) questionnaire. There were no significant differences in students' perceptions of their learning environment between the tertiary hospital, combined programs involving a tertiary and secondary metropolitan hospital, rural sites with a population of more than 25,000, and rural sites with a population of less than 25,000 people. Perceptions were similar in male and female students. The overall mean score for all perceptions of the learning environment in obstetrics and gynaecology was in the range considered to be favorable. Higher scores of perceptions of the learning environment were positively associated with measures of academic achievement in the clinical, but not written, examination. Medical students' perceptions of their learning environment in obstetrics and gynaecology were not influenced by the geographical site of delivery or their gender but were positively related to higher academic achievement. 
Provided that appropriate academic and clinical support systems are put in place, the education of medical students can be extended outside major hospitals and into outer metropolitan and rural communities without any apparent reduction in perceptions of the quality of their learning environment.

  14. Global similarity predicts dissociation of classification and recognition: evidence questioning the implicit-explicit learning distinction in amnesia.

    PubMed

    Jamieson, Randall K; Holmes, Signy; Mewhort, D J K

    2010-11-01

    Dissociation of classification and recognition in amnesia is widely taken to imply 2 functional systems: an implicit procedural-learning system that is spared in amnesia and an explicit episodic-learning system that is compromised. We argue that both tasks reflect the global similarity of probes to memory. In classification, subjects sort unstudied grammatical exemplars from lures, whereas in recognition, they sort studied grammatical exemplars from lures. Hence, global similarity is necessarily greater in recognition than in classification. Moreover, a grammatical exemplar's similarity to studied exemplars is a nonlinear function of the integrity of the data in memory. Assuming that data integrity is better for control subjects than for subjects with amnesia, the nonlinear relation combined with the advantage for recognition over classification predicts the dissociation of recognition and classification. To illustrate the dissociation of recognition and classification in healthy undergraduates, we manipulated study time to vary the integrity of the data in memory and brought the dissociation under experimental control. We argue that the dissociation reflects a general cost in memory rather than a selective impairment of separate procedural and episodic systems. (c) 2010 APA, all rights reserved

  15. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  16. The Relationship between Language Anxiety, Interpretation of Anxiety, Intrinsic Motivation and the Use of Learning Strategies

    ERIC Educational Resources Information Center

    Nishitani, Mari; Matsuda, Toshiki

    2011-01-01

    Research on language anxiety has so far focused on the level of anxiety. This study instead hypothesizes that the interpretation of anxiety and the recognition of failure have an impact on learning, and investigates how language anxiety and intrinsic motivation affect the use of learning strategies through the recognition of failure.…

  17. Global Similarity Predicts Dissociation of Classification and Recognition: Evidence Questioning the Implicit-Explicit Learning Distinction in Amnesia

    ERIC Educational Resources Information Center

    Jamieson, Randall K.; Holmes, Signy; Mewhort, D. J. K.

    2010-01-01

    Dissociation of classification and recognition in amnesia is widely taken to imply 2 functional systems: an implicit procedural-learning system that is spared in amnesia and an explicit episodic-learning system that is compromised. We argue that both tasks reflect the global similarity of probes to memory. In classification, subjects sort…

  18. Learning-Dependent Changes of Associations between Unfamiliar Words and Perceptual Features: A 15-Day Longitudinal Study

    ERIC Educational Resources Information Center

    Kambara, Toshimune; Tsukiura, Takashi; Shigemune, Yayoi; Kanno, Akitake; Nouchi, Rui; Yomogida, Yukihito; Kawashima, Ryuta

    2013-01-01

    This study examined behavioral changes in 15-day learning of word-picture (WP) and word-sound (WS) associations, using meaningless stimuli. Subjects performed a learning task and two recognition tasks under the WP and WS conditions every day for 15 days. Two main findings emerged from this study. First, behavioral data of recognition accuracy and…

  19. Will Increasing Academic Recognition of Workplace Learning in the UK Reinforce Existing Gender Divisions in the Labour Market?

    ERIC Educational Resources Information Center

    Walsh, Anita

    2006-01-01

    In the United Kingdom there has been a considerable increase in the academic recognition of workplace learning, and a number of new awards drawing on workplace learning have been introduced. These include apprenticeships and Advanced Apprenticeships, both of which contain National Vocational Qualifications, and the Foundation Degree. In addition,…

  20. A Habermasian Analysis of a Process of Recognition of Prior Learning for Health Care Assistants

    ERIC Educational Resources Information Center

    Sandberg, Fredrik

    2012-01-01

    This article discusses a process of recognition of prior learning for accreditation of prior experiential learning to qualify for course credits used in an adult in-service education program for health care assistants at the upper-secondary level in Sweden. The data are based on interviews and observations drawn from a field study, and Habermas's…

  1. Motor-visual neurons and action recognition in social interactions.

    PubMed

    de la Rosa, Stephan; Bülthoff, Heinrich H

    2014-04-01

    Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for visual recognition of social information in social interactions - namely, context-specific and contingency-based learning.

  2. Recognition of Prior Learning: The Tensions between Its Inclusive Intentions and Constraints on Its Implementation

    ERIC Educational Resources Information Center

    Cooper, Linda; Ralphs, Alan; Harris, Judy

    2017-01-01

    This article provides some insight into the constraints on the potential of recognition of prior learning (RPL) to widen access to educational qualifications. Its focus is on a conceptual framework that emerged from a South African study of RPL practices across four different learning contexts. Working from a social realist perspective, it argues…

  3. Review of Speech-to-Text Recognition Technology for Enhancing Learning

    ERIC Educational Resources Information Center

    Shadiev, Rustam; Hwang, Wu-Yuin; Chen, Nian-Shing; Huang, Yueh-Min

    2014-01-01

    This paper reviewed literature from 1999 to 2014 inclusively on how Speech-to-Text Recognition (STR) technology has been applied to enhance learning. The first aim of this review is to understand how STR technology has been used to support learning over the past fifteen years, and the second is to analyze all research evidence to understand how…

  4. Applications of Speech-to-Text Recognition and Computer-Aided Translation for Facilitating Cross-Cultural Learning through a Learning Activity: Issues and Their Solutions

    ERIC Educational Resources Information Center

    Shadiev, Rustam; Wu, Ting-Ting; Sun, Ai; Huang, Yueh-Min

    2018-01-01

    In this study, 21 university students, who represented thirteen nationalities, participated in an online cross-cultural learning activity. The participants were engaged in interactions and exchanges carried out on Facebook® and Skype® platforms, and their multilingual communications were supported by speech-to-text recognition (STR) and…

  5. Ways of Seeing the Recognition of Prior Learning (RPL): What Contribution Can Such Practices Make to Social Inclusion?

    ERIC Educational Resources Information Center

    Harris, Judy

    1999-01-01

    Describes four models of recognition of prior learning (PL): (1) procrustean--PL is made to match predetermined standards; (2) learning and development--PL approximates implicit academic standards; (3) radical--subjective knowledge is recognized as an alternative to dominant forms; and (4) Trojan-horse--PL is seen as socially constructed and…

  6. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks.

    PubMed

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-07-05

    Human activity recognition has gained more interest in several research communities given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task. Different types of noise in wearable sensor data frequently hamper the human activity recognition classification process. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted data sensors, and robust with respect to different issues in sensor data. We show that the AHN classifier is very competitive for physical activity recognition and very robust in comparison with other well-known machine learning methods.
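
    As context for the classification task described above, the following is a minimal sketch of a typical wearable-sensor activity-recognition pipeline: segment the sensor stream into windows, extract simple statistical features, and classify. It does not implement the artificial hydrocarbon network itself; the synthetic signals, class labels, and nearest-centroid classifier are stand-ins invented for illustration.

```python
import numpy as np

# Illustrative activity-recognition pipeline: windowing -> features ->
# classifier. This is NOT the paper's AHN technique, only the common
# surrounding task it is applied to.

rng = np.random.default_rng(0)

def windows(signal, size=50):
    """Split a 1-D sensor stream into non-overlapping windows."""
    n = len(signal) // size
    return signal[:n * size].reshape(n, size)

def features(win):
    """Per-window mean and standard deviation as a 2-D feature vector."""
    return np.column_stack([win.mean(axis=1), win.std(axis=1)])

# Synthetic accelerometer-like streams: "walking" (oscillation plus
# noise) versus "resting" (low-amplitude noise only).
t = np.arange(2000)
walking = np.sin(t / 5.0) + rng.normal(0, 0.3, t.size)
resting = rng.normal(0, 0.1, t.size)

X = np.vstack([features(windows(walking)), features(windows(resting))])
y = np.array([1] * 40 + [0] * 40)          # 1 = walking, 0 = resting

# Nearest-centroid classifier: assign each window to the closer class mean.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Even this trivial classifier separates the two synthetic activities by window variance; the paper's point is that real sensor noise is far messier, which is why a noise-tolerant learner such as AHN is proposed.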

  7. Knowledge-Linking Perceptions of Late-Elementary Students

    ERIC Educational Resources Information Center

    Schuh, Kathy L.; Kuo, Yi-Lung; Knupp, Tawnya L.

    2014-01-01

    This study describes student perceptions of potential elaborative or generative learning strategies called student knowledge links. This construct was assessed using the Student Knowledge Linking Instrument-Perceptions (SKLIP), a new learning inventory to measure late-elementary student perceptions of the creation of student knowledge links. After…

  8. Development of simulation-based learning programme for improving adherence to time-out protocol on high-risk invasive procedures outside of operating room.

    PubMed

    Jeong, Eun Ju; Chung, Hyun Soo; Choi, Jeong Yun; Kim, In Sook; Hong, Seong Hee; Yoo, Kyung Sook; Kim, Mi Kyoung; Won, Mi Yeol; Eum, So Yeon; Cho, Young Soon

    2017-06-01

    The aim of this study was to develop a simulation-based time-out learning programme targeted at nurses participating in high-risk invasive procedures and to determine the effects of applying the new programme on nurses' acceptance. The study used a simulation-based learning pre- and postdesign and was targeted at 48 registered nurses working in the general ward and the emergency department of a tertiary teaching hospital. Differences between acceptance and performance rates were analysed using the mean, standard deviation, and Wilcoxon signed-rank test. The perception survey and score sheet were validated through a content validity index, and evaluator reliability was verified using the intraclass correlation coefficient. Results showed a high level of acceptance of the high-risk invasive procedure protocol (P<.01). Further, improvement was consistent regardless of clinical experience, workplace, or prior experience with simulation-based learning. The face validity of the programme scored over 4.0 out of 5.0. This simulation-based learning programme was effective in improving recognition of the time-out protocol and gave participants the opportunity to become proactive in cases of high-risk invasive procedures performed outside of the operating room. © 2017 John Wiley & Sons Australia, Ltd.

  9. Rapid effects of the G-protein coupled oestrogen receptor (GPER) on learning and dorsal hippocampus dendritic spines in female mice.

    PubMed

    Gabor, Christopher; Lymer, Jennifer; Phan, Anna; Choleris, Elena

    2015-10-01

    Recently, oestrogen receptors (ERs) have been implicated in rapid learning processes. We have previously shown that 17β-estradiol, ERα and ERβ agonists can improve learning within 40 min of drug administration in mice. However, oestrogen action at the classical receptors may only in part explain these rapid learning effects. Chronic treatment with a G-protein coupled oestrogen receptor (GPER) agonist has been shown to affect learning and memory in ovariectomized rats, yet little is known about its rapid learning effects. Therefore, we investigated whether the GPER agonist G-1 at 1 μg/kg, 6 μg/kg, 10 μg/kg, and 30 μg/kg could affect social recognition, object recognition, and object placement learning in ovariectomized CD1 mice within 40 min of drug administration. We also examined rapid effects of G-1 on CA1 hippocampal dendritic spine density and length within 40 min of drug administration, but in the absence of any learning tests. Results suggest a rapid enhancing effect of GPER activation on social recognition, object recognition and object placement learning. G-1 treatment also resulted in increased dendritic spine density in the stratum radiatum of the CA1 hippocampus. Hence GPER, along with the classical ERs, may mediate the rapid effects of oestrogen on learning and neuronal plasticity. To our knowledge, this is the first report of GPER effects occurring within a 40 min time frame. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Spatiotemporal information during unsupervised learning enhances viewpoint invariant object recognition

    PubMed Central

    Tian, Moqian; Grill-Spector, Kalanit

    2015-01-01

    Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage to unsupervised learning with spatiotemporal continuity or motion information over training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples. PMID:26024454

  11. Evaluating deep learning architectures for Speech Emotion Recognition.

    PubMed

    Fayek, Haytham M; Lech, Margaret; Cavedon, Lawrence

    2017-08-01

    Speech Emotion Recognition (SER) can be regarded as a static or dynamic classification problem, which makes SER an excellent test bed for investigating and comparing various deep learning architectures. We describe a frame-based formulation to SER that relies on minimal speech processing and end-to-end deep learning to model intra-utterance dynamics. We use the proposed SER system to empirically explore feed-forward and recurrent neural network architectures and their variants. Experiments conducted illuminate the advantages and limitations of these architectures in paralinguistic speech recognition and emotion recognition in particular. As a result of our exploration, we report state-of-the-art results on the IEMOCAP database for speaker-independent SER and present quantitative and qualitative assessments of the models' performances. Copyright © 2017 Elsevier Ltd. All rights reserved.
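
    The frame-based formulation mentioned above can be sketched in a few lines: a model produces emotion posteriors per frame, and the utterance-level decision aggregates them (here, by averaging). This sketch uses a placeholder linear layer with made-up weights and random MFCC-like features; it does not reproduce the paper's feed-forward or recurrent architectures, and all names below are hypothetical.

```python
import numpy as np

# Sketch of frame-based speech emotion recognition: per-frame class
# posteriors, averaged over the utterance, then argmax. The "model" is a
# placeholder linear layer, not the paper's networks.

rng = np.random.default_rng(1)
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def frame_posteriors(frames, W, b):
    """Per-frame emotion posteriors from a linear layer (stand-in model)."""
    return softmax(frames @ W + b)

def utterance_label(frames, W, b):
    """Aggregate frame posteriors by averaging, then take the argmax."""
    return EMOTIONS[int(frame_posteriors(frames, W, b).mean(axis=0).argmax())]

# 120 frames of 13-dim features (e.g. MFCC-like), plus placeholder weights.
frames = rng.normal(size=(120, 13))
W = rng.normal(size=(13, len(EMOTIONS)))
b = np.zeros(len(EMOTIONS))

label = utterance_label(frames, W, b)
```

Treating each frame as an independent classification example is what makes SER a convenient test bed: the same frame-level front end can feed a feed-forward network, while a recurrent variant would instead model the frame sequence before aggregation.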

  12. Subjective Learning Discounts Test Type: Evidence from an Associative Learning and Transfer Task

    PubMed Central

    Touron, Dayna R.; Hertzog, Christopher; Speagle, James Z.

    2011-01-01

    We evaluated the extent to which memory test format and test transfer influence the dynamics of metacognitive judgments. Participants completed 2 study-test phases for paired-associates, with or without transferring test type, in one of four conditions: (1) recognition then recall, (2) recall then recognition, (3) recognition throughout, or (4) recall throughout. Global judgments were made pre-study, post-study, and post-test for each phase; judgments of learning (JOLs) following item study were also collected. Results suggest that metacognitive judgment accuracy varies substantially by memory test type. Whereas underconfidence in JOLs and global predictions increases with recall practice (Koriat’s underconfidence-with-practice effect), underconfidence decreases with recognition practice. Moreover, performance changes when transferring test type were not fully anticipated by pre-test judgments. PMID:20178957

  13. State of the Science in Heart Failure Symptom Perception Research: An Integrative Review.

    PubMed

    Lee, Solim; Riegel, Barbara

    Heart failure (HF) is a common condition requiring self-care to maintain physical stability, prevent hospitalization, and improve quality of life. Symptom perception, a domain of HF self-care newly added to the Situation-Specific Theory of HF Self-Care, is defined as a comprehensive process of monitoring and recognizing physical sensations and interpreting and labeling the meaning of the sensations. The purpose of this integrative review was to describe the research conducted on HF symptom perception to further understanding of this new concept. A literature search was conducted using 8 databases. The search term of HF was combined with symptom, plus symptom perception subconcepts of monitoring, somatic awareness, detection, recognition, interpretation, and appraisal. Only peer-reviewed original articles published in English with full-text availability were included. No historical limits were imposed. Study subjects were adults. Twenty-one studies met the inclusion criteria. Each study was categorized into either symptom monitoring or symptom recognition and interpretation. Although daily weighing and HF-related symptom-monitoring behaviors were insufficient in HF patients, use of a symptom diary improved HF self-care, symptom distress and functional class, and decreased mortality, hospital stay, and medical costs. Most HF patients had trouble recognizing an exacerbation of symptoms. Aging, comorbid conditions, and gradual symptom progression made it difficult to recognize and correctly interpret a symptom exacerbation. Living with others, higher education, higher uncertainty, shorter symptom duration, worse functional class, and an increased number of previous hospitalizations were positively associated with symptom recognition. Existing research fails to capture all of the elements in the theoretical definition of symptom perception.

  14. Research on Speech Perception. Progress Report No. 13.

    ERIC Educational Resources Information Center

    Pisoni, David B.; And Others

    Summarizing research activities in 1987, this is the thirteenth annual report of research on speech perception, analysis, synthesis, and recognition conducted in the Speech Research Laboratory of the Department of Psychology at Indiana University. The report includes extended manuscripts, short reports, progress reports, and information on…

  15. Comparison of bimodal and bilateral cochlear implant users on speech recognition with competing talker, music perception, affective prosody discrimination, and talker identification.

    PubMed

    Cullington, Helen E; Zeng, Fan-Gang

    2011-02-01

    Despite excellent performance in speech recognition in quiet, most cochlear implant users have great difficulty with speech recognition in noise, music perception, identifying tone of voice, and discriminating different talkers. This may be partly due to the pitch coding in cochlear implant speech processing. Most current speech processing strategies use only the envelope information; the temporal fine structure is discarded. One way to improve electric pitch perception is to use residual acoustic hearing via a hearing aid on the nonimplanted ear (bimodal hearing). This study aimed to test the hypothesis that bimodal users would perform better than bilateral cochlear implant users on tasks requiring good pitch perception. Four pitch-related tasks were used. 1. Hearing in Noise Test (HINT) sentences spoken by a male talker with a competing female, male, or child talker. 2. Montreal Battery of Evaluation of Amusia. This is a music test with six subtests examining pitch, rhythm and timing perception, and musical memory. 3. Aprosodia Battery. This has five subtests evaluating aspects of affective prosody and recognition of sarcasm. 4. Talker identification using vowels spoken by 10 different talkers (three men, three women, two boys, and two girls). Bilateral cochlear implant users were chosen as the comparison group. Thirteen bimodal and 13 bilateral adult cochlear implant users were recruited; all had good speech perception in quiet. There were no significant differences between the mean scores of the bimodal and bilateral groups on any of the tests, although the bimodal group did perform better than the bilateral group on almost all tests. Performance on the different pitch-related tasks was not correlated, meaning that if a subject performed one task well they would not necessarily perform well on another. The correlation between the bimodal users' hearing threshold levels in the aided ear and their performance on these tasks was weak. 
Although the bimodal cochlear implant group performed better than the bilateral group on most parts of the four pitch-related tests, the differences were not statistically significant. The lack of correlation between test results shows that the tasks used are not simply providing a measure of pitch ability. Even if the bimodal users have better pitch perception, the real-world tasks used are reflecting more diverse skills than pitch. This research adds to the existing speech perception, language, and localization studies that show no significant difference between bimodal and bilateral cochlear implant users.

  16. Nurses' Experiences and Perceptions of Mobile Learning: A Survey in Beijing, China.

    PubMed

    Xiao, Qian; Sun, Aihua; Wang, Yicong; Zhang, Yan; Wu, Ying

    2018-01-01

To explore nurses' experiences and perceptions of mobile learning, 397 nurses were surveyed. All of them had used mobile learning in the past year via the internet, e-books, and WeChat. Smartphones were the most commonly used mobile learning tools, followed by tablets and laptop computers. The mean score of nurses' intention toward mobile learning was 12.1 (range 7 to 15), and it was related to nurses' gender, educational background, expected effect, ease of operation, self-learning management, and perceived interest. Nurses had positive perceptions of mobile learning, and conditions supporting nurses' mobile learning should be provided.

  17. Enrolment Purposes, Instructional Activities, and Perceptions of Attitudinal Learning in a Human Trafficking MOOC

    ERIC Educational Resources Information Center

    Watson, Sunnie Lee; Kim, Woori

    2016-01-01

    This study examines learner enrolment purposes, perceptions on instructional activities and their relationship to learning gains in a Massive Open Online Course (MOOC) for attitudinal change regarding human trafficking. Using an author-developed survey, learners reported their perceptions on instructional activities and learning gains within the…

  18. Beginning High School Teachers' Perceptions of Involvement in Professional Learning Communities and Its Impact on Teacher Retention

    ERIC Educational Resources Information Center

    Lovett, Helen Tomlinson

    2013-01-01

    The purpose of this study was to examine beginning high school teachers' perceptions of involvement in Professional Learning Communities in southeastern North Carolina and to determine whether beginning teachers' perceptions of involvement in Professional Learning Communities influenced their decisions to move to another location, stay in…

  19. Impact of Multimedia on Students' Perceptions of the Learning Environment in Mathematics Classrooms

    ERIC Educational Resources Information Center

    Chipangura, Addwell; Aldridge, Jill

    2017-01-01

    We investigated (1) whether the learning environment perceptions of students in classes frequently exposed to multimedia differed from those of students in classes that were not, (2) whether exposure to multimedia was differentially effective for males and females and (3) relationships between students' perceptions of the learning environment and…

  20. Blended Learning: The Student Viewpoint.

    PubMed

    Shantakumari, N; Sajith, P

    2015-01-01

Blended learning (BL) is defined as "a way of meeting the challenges of tailoring learning and development to the needs of individuals by integrating the innovative and technological advances offered by online learning with the interaction and participation offered in the best of traditional learning." The Gulf Medical University (GMU), Ajman, UAE, offers a number of courses which incorporate BL, combining contact classes with an online component on an e-learning platform. Insufficient learning satisfaction has been cited as an obstacle to its implementation and efficacy. The aim was to determine students' perceptions of BL, which in turn determine their satisfaction and the efficacy of the courses offered. This was a cross-sectional study conducted at the GMU, Ajman, between January and December 2013. Perceptions of BL process, content, and ease of use were collected from 75 students enrolled in the certificate courses offered by the university using a questionnaire. Student perceptions were assessed using the Mann-Whitney U-test and Kruskal-Wallis test on the basis of gender, age, and course enrollment. The median scores of all the questions in the three domains were above three, suggesting positive perceptions of BL. The distribution of perceptions was similar across gender and age. However, significant differences were observed by course enrollment (P = 0.02). Students held a positive perception of the BL courses offered at this university. The difference in perceptions among students of different courses suggests that the BL format needs to be modified according to course content to improve how it is perceived.

  1. Deep learning and non-negative matrix factorization in recognition of mammograms

    NASA Astrophysics Data System (ADS)

    Swiderski, Bartosz; Kurek, Jaroslaw; Osowski, Stanislaw; Kruk, Michal; Barhoumi, Walid

    2017-02-01

This paper presents a novel approach to the recognition of mammograms. The analyzed mammograms represent normal and breast cancer (benign and malignant) cases. The solution applies deep learning to image recognition. To increase classification accuracy, non-negative matrix factorization and statistical self-similarity of the images are applied. The images reconstructed using these two approaches enrich the database and thereby improve the quality measures of mammogram recognition (accuracy, sensitivity, and specificity). Numerical experiments performed on the large DDSM database, containing more than 10,000 mammograms, confirmed good class-recognition accuracy, exceeding the best results reported for this database in current publications.
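The augmentation step above rests on non-negative matrix factorization: an image (or patch) is approximated as a low-rank product of non-negative factors, and the reconstruction serves as an extra training sample. A minimal sketch of the idea, using the standard Lee-Seung multiplicative updates rather than the authors' exact pipeline (the `nmf` helper and the toy patch are illustrative assumptions):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix V into W @ H via the Lee-Seung
    multiplicative updates. Returns W (m x rank) and H (rank x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

# Toy stand-in for a mammogram patch: any non-negative image block works.
rng = np.random.default_rng(1)
patch = rng.random((32, 32))
W, H = nmf(patch, rank=8)
reconstruction = W @ H  # low-rank, non-negative version used to enrich the set
err = np.linalg.norm(patch - reconstruction) / np.linalg.norm(patch)
```

In practice the rank and iteration count would be tuned per dataset; the point is only that `W @ H` yields a non-negative low-rank reconstruction that can be added to the training data.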

  2. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    NASA Astrophysics Data System (ADS)

    Wang, Juan

    2018-03-01

Iris images are easily corrupted by noise and uneven lighting. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with a hybrid feature. 2D Gabor filters and the gray-level co-occurrence matrix (GLCM) are employed to generate a multi-granularity hybrid feature vector; the Gabor filters and GLCM features capture low-to-intermediate-frequency and high-frequency texture information, respectively. Finally, an extreme learning machine is used for iris recognition. Experimental results show that the proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower equal error rate (EER) of 0.12% while maintaining real-time performance, outperforming other mainstream iris recognition algorithms.
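The ELM classifier at the core of this approach admits a compact implementation: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form by least squares. A minimal sketch (the `ELM` class and the Gaussian-blob stand-in for Gabor/GLCM feature vectors are illustrative, not the paper's implementation):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: a random hidden layer plus
    output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random nonlinear feature map
        T = np.eye(n_classes)[y]           # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy stand-in for hybrid feature vectors: two well-separated classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 8)), rng.normal(2, 1, (100, 8))])
y = np.repeat([0, 1], 100)
acc = (ELM().fit(X, y).predict(X) == y).mean()
```

Because training reduces to one linear solve, ELMs are fast enough to suit the real-time constraint the abstract mentions.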

  3. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

    PubMed Central

    Kronenberger, William G.; Castellanos, Irina; Pisoni, David B.

    2017-01-01

    Purpose We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Results Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Conclusion Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. Supplemental materials https://doi.org/10.23641/asha.5216200 PMID:28724130

  4. [Emotional facial expression recognition impairment in Parkinson disease].

    PubMed

    Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre

    2006-03-01

Some behavioral disturbances observed in Parkinson's disease (PD) may be related to impaired recognition of various social signals, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared with gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired in patients, whereas gender recognition, visuo-perceptive capacities and overall efficiency were preserved. Post hoc analyses disclosed a deficit in fear and disgust recognition compared with control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to dopaminergic depletion in the basal ganglia and limbic brain regions, and could contribute to the psycho-behavioral disorders, particularly communication disorders, observed in patients with Parkinson's disease.

  5. Two Stage Data Augmentation for Low Resourced Speech Recognition (Author’s Manuscript)

    DTIC Science & Technology

    2016-09-12

Keywords: speech recognition, deep neural networks, data augmentation. When training data is limited—whether it be audio or text—the obvious…

  6. Medical Students’ Perception of Their Educational Environment

    PubMed Central

    Pai, Preethi G; Menezes, Vishma; Srikanth; Subramanian, Atreya M.; Shenoy, Jnaneshwara P.

    2014-01-01

Background: Students' perception of the environment within which they study has been shown to have a significant impact on their behavior, academic progress and sense of well-being. This study was undertaken to evaluate students' perception of their learning environment in an Indian medical school following a traditional curriculum, and to study differences, if any, between the students according to the stages of medical education, i.e., the pre-clinical and clinical stages. Methodology: In the present study, the Dundee Ready Education Environment Measure (DREEM) inventory was administered to undergraduate medical students of the first (n = 227), third (n = 175), fifth (n = 171) and seventh (n = 123) semesters. Scores obtained were expressed as mean ± Standard Deviation (SD) and analyzed using one-way ANOVA and Dunnett's test. P-value < 0.05 was considered significant. Results: The mean DREEM score for our medical school was 123/200. The first-year students were found to be more satisfied with the learning environment (indicated by their higher DREEM score) compared to students of other semesters. A progressive decline in scores with each successive semester was observed. Evaluating the sub-domains of perception, students in all semesters had a more positive perception of learning (average mean score: 29.44), their perception of course organizers moved in the right direction (average mean score: 26.86), their academic self-perception was more on the positive side (average mean score: 20.14), they had a more positive perception of atmosphere (average mean score: 29.07) and their social self-perception could be graded as not too bad (average mean score: 17.02). Conclusion: The present study revealed that all groups of students perceived their learning environment positively. However, a few problematic areas of the learning environment were perceived: students were stressed more often, and they felt that the course organizers were authoritarian and emphasized factual learning. Implementing more problem-based learning, student counseling and workshops on teaching-learning for educators might enable us to remedy and enrich our learning environment. PMID:24596737

  7. Exploring Students' Perceptions of Service-Learning Experiences in an Undergraduate Web Design Course

    ERIC Educational Resources Information Center

    Lee, Sang Joon; Wilder, Charlie; Yu, Chien

    2018-01-01

    Service-learning is an experiential learning experience where students learn and develop through active participation in community service to meet the needs of a community. This study explored student learning experiences in a service-learning group project and their perceptions of service-learning in an undergraduate web design course. The data…

  8. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the development of motion sensors, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and the output becomes the final result. The motion and audio models are learned using Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizers of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison shows that the multi-modal model composed of the three models scores the highest recognition rate. This improvement in recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
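The integration step described above is a late-fusion scheme: each modality produces class posteriors, and the framework combines them into a single decision. The abstract does not give the exact combination rule, so the sketch below uses a weighted geometric mean as one common choice; the posterior values are made up for illustration:

```python
import numpy as np

# Hypothetical per-modality posteriors over 4 gesture classes,
# e.g. from an HMM (motion), an HMM (audio), and a Random Forest (video).
motion = np.array([0.50, 0.30, 0.10, 0.10])
audio  = np.array([0.40, 0.35, 0.15, 0.10])
video  = np.array([0.20, 0.60, 0.10, 0.10])

def fuse(*posteriors, weights=None):
    """Late fusion: weighted geometric mean of modality posteriors,
    renormalized so the result is again a distribution."""
    P = np.stack(posteriors)
    w = np.ones(len(P)) / len(P) if weights is None else np.asarray(weights)
    fused = np.prod(P ** w[:, None], axis=0)
    return fused / fused.sum()

fused = fuse(motion, audio, video)
predicted_class = int(np.argmax(fused))
```

With these made-up numbers, two weakly confident modalities (motion, audio) are overruled by a confident third (video), which is the complementary relationship the authors describe.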

  9. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2%, and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  10. [Short-term memory characteristics of vibration intensity tactile perception on human wrist].

    PubMed

    Hao, Fei; Chen, Li-Juan; Lu, Wei; Song, Ai-Guo

    2014-12-25

In this study, a recall experiment and a recognition experiment were designed to assess the human wrist's short-term memory characteristics of tactile perception of vibration intensity, using a novel homemade vibrotactile display device based on the spatiotemporal combination of vibrations from multiple micro vibration motors. Based on the experimental data, the short-term memory span, recognition accuracy and reaction time for vibration intensity were analyzed. The experimental results support several conclusions: (1) The average short-term memory span of tactile perception of vibration intensity is 3 ± 1 items; (2) The greater the difference between two adjacent discrete intensities of vibrotactile stimulation, the better the average short-term memory span of the human wrist; (3) There is an obvious difference in average short-term memory span for vibration intensity between males and females; (4) Information extraction in short-term memory of vibrotactile display proceeds by traversal scanning with comparison; (5) The recognition accuracy and reaction time of vibrotactile display compare unfavourably with those of vision and audition. These results are important for designing vibrotactile display coding schemes.

  11. Impaired Perception of Emotional Expression in Amyotrophic Lateral Sclerosis.

    PubMed

    Oh, Seong Il; Oh, Ki Wook; Kim, Hee Jin; Park, Jin Seok; Kim, Seung Hyun

    2016-07-01

The increasing recognition that deficits in social emotions occur in amyotrophic lateral sclerosis (ALS) is helping to explain the spectrum of neuropsychological dysfunctions, thus supporting the view of ALS as a multisystem disorder involving neuropsychological deficits as well as motor deficits. The aim of this study was to characterize the emotion perception abilities of Korean patients with ALS based on the recognition of facial expressions. Twenty-four patients with ALS and 24 age- and sex-matched healthy controls completed neuropsychological tests and facial emotion recognition tasks [ChaeLee Korean Facial Expressions of Emotions (ChaeLee-E)]. The ChaeLee-E test includes facial expressions for seven emotions: happiness, sadness, anger, disgust, fear, surprise, and neutral. The ability to perceive facial emotions was significantly worse among ALS patients than among healthy controls [65.2±18.0% vs. 77.1±6.6% (mean±SD), p=0.009]. Eight of the 24 patients (33%) scored below the 5th percentile of controls for recognizing facial emotions. Emotion perception deficits occur in Korean ALS patients, particularly in recognizing facial expressions of emotion. These findings expand the spectrum of cognitive and behavioral dysfunction associated with ALS to include emotion processing dysfunction.

  12. Clinical evaluation of music perception, appraisal and experience in cochlear implant users

    PubMed Central

    Drennan, Ward. R.; Oleson, Jacob J.; Gfeller, Kate; Crosson, Jillian; Driscoll, Virginia D.; Won, Jong Ho; Anderson, Elizabeth S.; Rubinstein, Jay T.

    2014-01-01

Objectives The objectives were to evaluate the relationships among music perception, appraisal, and experience in cochlear implant users in multiple clinical settings and to examine the viability of two assessments designed for clinical use. Design Musical background questionnaires (IMBQ) were administered by audiologists in 14 clinics in the United States and Canada. The Clinical Assessment of Music Perception (CAMP) included tests of pitch-direction discrimination and melody and timbre recognition. The IMBQ queried users on prior musical involvement, music listening habits before and after implantation, and music appraisals. Study sample One hundred forty-five users of Advanced Bionics and Cochlear Ltd cochlear implants. Results Performance on the pitch-direction discrimination, melody recognition, and timbre recognition tests was consistent with previous studies with smaller cohorts, as well as with more extensive protocols conducted in other centers. Relationships between perceptual accuracy and music enjoyment were weak, suggesting that perception and appraisal are relatively independent for CI users. Conclusions Perceptual abilities as measured by the CAMP had little to no relationship with music appraisals and little relationship with musical experience. The CAMP and IMBQ are feasible for routine clinical use, providing results consistent with previous thorough laboratory-based investigations. PMID:25177899

  13. Binary ROCs in Perception and Recognition Memory Are Curved

    ERIC Educational Resources Information Center

    Dube, Chad; Rotello, Caren M.

    2012-01-01

    In recognition memory, a classic finding is that receiver operating characteristics (ROCs) are curvilinear. This has been taken to support the fundamental assumptions of signal detection theory (SDT) over discrete-state models such as the double high-threshold model (2HTM), which predicts linear ROCs. Recently, however, Broder and Schutz (2009)…

  14. Researching the Use of Voice Recognition Writing Software.

    ERIC Educational Resources Information Center

    Honeycutt, Lee

    2003-01-01

    Notes that voice recognition technology (VRT) has become accurate and fast enough to be useful in a variety of writing scenarios. Contends that little is known about how this technology might affect writing process or perceptions of silent writing. Explores future use of VRT by examining past research in the technology of dictation. (PM)

  15. Student Perceptions and Attitudes about Community Service-Learning in the Teacher Training Curriculum

    ERIC Educational Resources Information Center

    Bender, Gerda; Jordaan, Rene

    2007-01-01

    Much of the international research on Community Service-Learning has investigated the benefits, outcomes, and learning experiences of students already engaged in service-learning projects and programmes. As there is scant research on students' attitudes to and perceptions of Service-Learning, before this learning became integrated into an academic…

  16. Pitch and Plasticity: Insights from the Pitch Matching of Chords by Musicians with Absolute and Relative Pitch

    PubMed Central

    McLachlan, Neil M.; Marco, David J. T.; Wilson, Sarah J.

    2013-01-01

    Absolute pitch (AP) is a form of sound recognition in which musical note names are associated with discrete musical pitch categories. The accuracy of pitch matching by non-AP musicians for chords has recently been shown to depend on stimulus familiarity, pointing to a role of spectral recognition mechanisms in the early stages of pitch processing. Here we show that pitch matching accuracy by AP musicians was also dependent on their familiarity with the chord stimulus. This suggests that the pitch matching abilities of both AP and non-AP musicians for concurrently presented pitches are dependent on initial recognition of the chord. The dual mechanism model of pitch perception previously proposed by the authors suggests that spectral processing associated with sound recognition primes waveform processing to extract stimulus periodicity and refine pitch perception. The findings presented in this paper are consistent with the dual mechanism model of pitch, and in the case of AP musicians, the formation of nominal pitch categories based on both spectral and periodicity information. PMID:24961624

  17. An in-depth cognitive examination of individuals with superior face recognition skills.

    PubMed

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings.

  18. Dental hygiene students' perceptions of distance learning: do they change over time?

    PubMed

    Sledge, Rhonda; Vuk, Jasna; Long, Susan

    2014-02-01

    The University of Arkansas for Medical Sciences dental hygiene program established a distant site where the didactic curriculum was broadcast via interactive video from the main campus to the distant site, supplemented with on-line learning via Blackboard. This study compared the perceptions of students towards distance learning as they progressed through the 21 month curriculum. Specifically, the study sought to answer the following questions: Is there a difference in the initial perceptions of students on the main campus and at the distant site toward distance learning? Do students' perceptions change over time with exposure to synchronous distance learning over the course of the curriculum? All 39 subjects were women between the ages of 20 and 35 years. Of the 39 subjects, 37 were Caucasian and 2 were African-American. A 15-question Likert scale survey was administered at 4 different periods during the 21 month program to compare changes in perceptions toward distance learning as students progressed through the program. An independent sample t-test and ANOVA were utilized for statistical analysis. At the beginning of the program, independent samples t-test revealed that students at the main campus (n=34) perceived statistically significantly higher effectiveness of distance learning than students at the distant site (n=5). Repeated measures of ANOVA revealed that perceptions of students at the main campus on effectiveness and advantages of distance learning statistically significantly decreased whereas perceptions of students at distant site statistically significantly increased over time. Distance learning in the dental hygiene program was discussed, and replication of the study with larger samples of students was recommended.

  19. Transfer Learning for Activity Recognition: A Survey

    PubMed Central

    Cook, Diane; Feuz, Kyle D.; Krishnan, Narayanan C.

    2013-01-01

    Many intelligent systems that focus on the needs of a human require information about the activities being performed by the human. At the core of this capability is activity recognition, which is a challenging and well-researched problem. Activity recognition algorithms require substantial amounts of labeled training data yet need to perform well under very diverse circumstances. As a result, researchers have been designing methods to identify and utilize subtle connections between activity recognition datasets, or to perform transfer-based activity recognition. In this paper we survey the literature to highlight recent advances in transfer learning for activity recognition. We characterize existing approaches to transfer-based activity recognition by sensor modality, by differences between source and target environments, by data availability, and by type of information that is transferred. Finally, we present some grand challenges for the community to consider as this field is further developed. PMID:24039326

  20. [Neurophysiology and neuropsychology of recognition confabulation in hospitalized schizophrenic patients].

    PubMed

    Salazar Fraile, J; Tabarés Seisdedos, R; Selva Vera, G; Balanzá Martínez, V; Leal Cercós, C; Vilela Soler, C; Vallet Mas, M

    1998-01-01

Recognition confabulation was studied in 16 schizophrenic patients and 16 normal controls. Half of the schizophrenic patients presented recognition confabulation, while the remaining 8 patients and the 16 controls did not. This type of confabulation was associated with attentional deficits, difficulties in perceptual follow-up, and perceptive changes. These tests satisfactorily discriminated confabulating schizophrenic patients from both ill and healthy non-confabulating subjects. The possible mechanisms underlying this type of confabulation are discussed in relation to the deficiencies observed.

  1. Assistive Technology and Adults with Learning Disabilities: A Blueprint for Exploration and Advancement.

    ERIC Educational Resources Information Center

    Raskind, Marshall

    1993-01-01

    This article describes assistive technologies for persons with learning disabilities, including word processing, spell checking, proofreading programs, outlining/"brainstorming" programs, abbreviation expanders, speech recognition, speech synthesis/screen review, optical character recognition systems, personal data managers, free-form databases,…

  2. Can Humans Fly? Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08


  3. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environments faced in real-world applications. To robustly recognize facial emotions in natural real-world situations, this paper proposes an approach called extreme sparse learning, which jointly learns a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  4. Autonomous learning in gesture recognition by using lobe component analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). To ensure robot safety when gestures are used for robot control, the interface must be implemented reliably and accurately. As in other PR applications, performance is largely determined by (1) feature selection (or model establishment) and (2) training from samples. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For (2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Owing to the LCA method's balanced learning between global and local features, large numbers of samples can be used in learning efficiently.
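
    The LCA update itself is more involved, but its two core ingredients, a Hebbian pull toward the input combined with winner-take-all lateral inhibition, can be sketched as a simple competitive learner on synthetic two-cluster data (an illustrative stand-in, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
# two input "concentrations": tight clusters around (0, 5) and (5, 0)
data = np.vstack([rng.normal((0.0, 5.0), 0.3, (200, 2)),
                  rng.normal((5.0, 0.0), 0.3, (200, 2))])
W = data[[0, 200]].copy()                  # seed each neuron with one input
counts = np.zeros(2)
rng.shuffle(data)

for x in data:                             # one online update per sample
    i = np.argmin(np.linalg.norm(W - x, axis=1))  # inhibition: winner only
    counts[i] += 1
    W[i] += (x - W[i]) / counts[i]         # Hebbian pull: incremental mean
```

    After one pass, each neuron's weight vector sits at the mean of the input concentration it won, which is the flavor of "lobe component" the abstract describes.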

  5. Students' Perceptions of Vocabulary Knowledge and Learning in a Middle School Science Classroom

    ERIC Educational Resources Information Center

    Brown, Patrick L.; Concannon, James P.

    2016-01-01

    This study investigated eighth-grade science students' (13-14-year-olds) perceptions of their vocabulary knowledge, learning, and content achievement. Data sources included pre- and posttest of students' perceptions of vocabulary knowledge, students' perceptions of vocabulary and reading strategies surveys, and a content achievement test.…

  6. Social Work Students' Perceptions of Team-Based Learning

    ERIC Educational Resources Information Center

    Macke, Caroline; Taylor, Jessica Averitt; Taylor, James E.; Tapp, Karen; Canfield, James

    2015-01-01

    This study sought to examine social work students' perceptions of Team-Based Learning (N = 154). Aside from looking at overall student perceptions, comparative analyses examined differences in perceptions between BSW and MSW students, and between Caucasian students and students of color. Findings for the overall sample revealed favorable…

  7. Social and Emotional Learning and Teacher-Student Relationships: Preschool Teachers' and Students' Perceptions

    ERIC Educational Resources Information Center

    Poulou, Maria S.

    2017-01-01

    The study aimed to investigate how teachers' perceptions of emotional intelligence, and social and emotional learning (SEL) relate to teacher-student relationships. Teachers' perceptions of teacher-student relationships and the degree of agreement with students' perceptions was also investigated. Preschool teachers from 92 public schools in…

  8. An FMRI Study of Olfactory Cues to Perception of Conspecific Stress

    DTIC Science & Technology

    2010-04-01

    modulate recognition of fear in ambiguous facial expressions. Psychol Sci 20: 177-183. 23. Pause BM, Ohrt A, Prehn A, Ferstl R (2004) Positive...emotional priming of facial affect perception in females is diminished by chemosensory anxiety signals. Chem Senses 29: 797-805. 24. Prehn A, Ohrt A

  9. Object Recognition and Random Image Structure Evolution

    ERIC Educational Resources Information Center

    Sadr, Jvid; Sinha, Pawan

    2004-01-01

    We present a technique called Random Image Structure Evolution (RISE) for use in experimental investigations of high-level visual perception. Potential applications of RISE include the quantitative measurement of perceptual hysteresis and priming, the study of the neural substrates of object perception, and the assessment and detection of subtle…

  10. Alignment between Informal Educator Perceptions and Audience Expectations of Climate Change Education

    ERIC Educational Resources Information Center

    Stylinski, Cathlyn; Heimlich, Joe; Palmquist, Sasha; Wasserman, Deborah; Youngs, Renae

    2017-01-01

    To understand the complexities of climate change on educator-visitor relationships, we compared educators' perceptions with audiences' expectations for informal science education institutions. Our findings suggest two disconnects: (a) a professional recognition that climate change education is related to institutional mission but a lack of…

  11. Perception of Biological Motion in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Freitag, Christine M.; Konrad, Carsten; Haberlen, Melanie; Kleser, Christina; von Gontard, Alexander; Reith, Wolfgang; Troje, Nikolaus F.; Krick, Christoph

    2008-01-01

    In individuals with autism or autism-spectrum-disorder (ASD), conflicting results have been reported regarding the processing of biological motion tasks. As biological motion perception and recognition might be related to impaired imitation, gross motor skills and autism specific psychopathology in individuals with ASD, we performed a functional…

  12. Educators' Perceptions on Bullying Prevention Strategies

    ERIC Educational Resources Information Center

    de Wet, Corene

    2017-01-01

    I report on an investigation into a group of Free State educators' recognition of bullying, their reactions to incidents of bullying, and their perceptions of the effectiveness of a number of bullying prevention strategies. The research instrument was a synthesis of the Delaware Research Questionnaire and questions based on findings from previous…

  13. Is Sweet Taste Perception Associated with Sweet Food Liking and Intake?

    PubMed Central

    Jayasinghe, Shakeela N.; Kruger, Rozanne; Walsh, Daniel C. I.; Cao, Guojiao; Rivers, Stacey; Richter, Marilize; Breier, Bernhard H.

    2017-01-01

    A range of psychophysical taste measurements are used to characterize an individual’s sweet taste perception and to assess links between taste perception and dietary intake. The aims of this study were to investigate the relationship between four different psychophysical measurements of sweet taste perception, and to explore which measures of sweet taste perception relate to sweet food intake. Forty-four women aged 20–40 years were recruited for the study. Four measures of sweet taste perception (detection and recognition thresholds, and sweet taste intensity and hedonic liking of suprathreshold concentrations) were assessed using glucose as the tastant. Dietary measurements included a four-day weighed food record, a sweet food-food frequency questionnaire and a sweet beverage liking questionnaire. Glucose detection and recognition thresholds showed no correlation with suprathreshold taste measurements or any dietary intake measurement. Importantly, sweet taste intensity correlated negatively with total energy and carbohydrate (starch, total sugar, fructose, glucose) intakes, frequency of sweet food intake and sweet beverage liking. Furthermore, sweet hedonic liking correlated positively with total energy and carbohydrate (total sugar, fructose, glucose) intakes. The present study shows a clear link between sweet taste intensity and hedonic liking on the one hand, and sweet food liking and total energy, carbohydrate and sugar intakes on the other. PMID:28708085

  14. Impaired perception of harmonic complexity in congenital amusia: a case study.

    PubMed

    Reed, Catherine L; Cahn, Steven J; Cory, Christopher; Szaflarski, Jerzy P

    2011-07-01

    This study investigates whether congenital amusia (an inability to perceive music from birth) also impairs the perception of musical qualities that do not rely on fine-grained pitch discrimination. We established that G.G. (64-year-old male, age-typical hearing) met the criteria of congenital amusia and demonstrated music-specific deficits (e.g., language processing, intonation, prosody, fine-grained pitch processing, pitch discrimination, identification of discrepant tones and direction of pitch for tones in a series, pitch discrimination within scale segments, predictability of tone sequences, recognition versus knowing memory for melodies, and short-term memory for melodies). Next, we conducted tests of tonal fusion, harmonic complexity, and affect perception: recognizing timbre, assessing consonance and dissonance, and recognizing musical affect from harmony. G.G. displayed relatively unimpaired perception and production of environmental sounds, prosody, and emotion conveyed by speech compared with impaired fine-grained pitch perception, tonal sequence discrimination, and melody recognition. Importantly, G.G. could not perform tests of tonal fusion that do not rely on pitch discrimination: He could not distinguish concurrent notes, timbre, consonance/dissonance, simultaneous notes, and musical affect. Results indicate at least three distinct problems: one with pitch discrimination, one with harmonic simultaneity, and one with musical affect. Each has distinct consequences for music perception.

  15. Recognition and Validation of Non Formal and Informal Learning: Lifelong Learning and University in the Italian Context

    ERIC Educational Resources Information Center

    Di Rienzo, Paolo

    2014-01-01

    This paper is a reflection, on the basis of empirical research conducted in Italy, on theoretical, methodological and systemic-organisational aspects linked to the recognition and validation of the prior learning acquired by adult learners or workers who decide to enrol at university at a later stage in their lives. The interest in this research…

  16. Recognition of Tacit Skills: Sustaining Learning Outcomes in Adult Learning and Work Re-Entry

    ERIC Educational Resources Information Center

    Evans, Karen; Kersh, Natasha; Kontiainen, Seppo

    2004-01-01

    This paper is based on the project "Recognition of Tacit Skills and Knowledge in Work Re-entry" carried out as a part of the ESRC-funded Research Network "Improving Incentives to Learning in the Workplace". The network aims to contribute to improved practice among a wide range of practitioners. The study has investigated the part played by tacit…

  17. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    PubMed

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
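
    The pipeline this abstract describes, unsupervised feature learning followed by a linear readout, can be sketched with a single Bernoulli RBM trained by contrastive divergence (a deep belief network stacks several such layers). The data below are synthetic binary patterns, not Persian characters, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_rbm(V, n_hidden=16, lr=0.1, epochs=30):
    """One Bernoulli RBM trained with contrastive divergence (CD-1)."""
    n_vis = V.shape[1]
    W = 0.1 * rng.standard_normal((n_vis, n_hidden))
    b = np.zeros(n_vis)                    # visible bias
    c = np.zeros(n_hidden)                 # hidden bias
    for _ in range(epochs):
        ph = sigmoid(V @ W + c)                        # positive hidden probs
        h = (rng.random(ph.shape) < ph).astype(float)  # sampled hidden states
        pv = sigmoid(h @ W.T + b)                      # reconstruction
        ph2 = sigmoid(pv @ W + c)                      # negative hidden probs
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        b += lr * (V - pv).mean(axis=0)
        c += lr * (ph - ph2).mean(axis=0)
    return W, c

# two synthetic "symbol" classes: left-heavy vs. right-heavy 8-pixel patterns
p0 = np.where(np.arange(8) < 4, 0.9, 0.1)
p1 = np.where(np.arange(8) < 4, 0.1, 0.9)
V = np.vstack([(rng.random((400, 8)) < p0).astype(float),
               (rng.random((400, 8)) < p1).astype(float)])
y = np.array([0] * 400 + [1] * 400)

W, c = train_rbm(V)                        # labels never touch this step
H = sigmoid(V @ W + c)                     # unsupervised hidden features
A = np.c_[H, np.ones(len(H))]              # linear readout on the features
beta = np.linalg.lstsq(A, y, rcond=None)[0]
acc = ((A @ beta > 0.5).astype(int) == y).mean()
```

    The point mirrored from the abstract is that the generative model is fit without labels, yet a simple linear classifier on its hidden representation separates the classes.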

  18. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    PubMed Central

    Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang

    2018-01-01

    Classification and recognition of underwater acoustic signals have long been an important research topic in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a convolutional neural network (CNN) and an extreme learning machine (ELM) is proposed. Features are extracted automatically from underwater acoustic signals by a deep convolutional network, and an underwater target recognition classifier is built on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function relies mainly on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal; an ELM was therefore used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. The ELM, fed with the CNN features, is then used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate, a great improvement over the traditional Mel frequency cepstral coefficient and Hilbert-Huang features. PMID:29780407
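
    The ELM classification stage can be sketched directly: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights, so no gradient descent is needed. Synthetic Gaussian features stand in for the CNN features used in the paper, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Extreme learning machine: random hidden layer, least-squares output."""
    def __init__(self, n_hidden=64):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        n_feat = X.shape[1]
        self.Wh = rng.standard_normal((n_feat, self.n_hidden))  # never trained
        self.bh = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.Wh + self.bh)
        T = np.eye(int(y.max()) + 1)[y]            # one-hot targets
        self.Wo = np.linalg.pinv(H) @ T            # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.Wh + self.bh)
        return np.argmax(H @ self.Wo, axis=1)

# stand-in for CNN features: two well-separated Gaussian classes
X = np.vstack([rng.normal(-2.0, 1.0, (100, 10)),
               rng.normal(+2.0, 1.0, (100, 10))])
y = np.array([0] * 100 + [1] * 100)
acc = (ELM().fit(X, y).predict(X) == y).mean()
```

    Only the output weights are solved for; the random hidden layer is left untouched, which is the speed and generalization argument the abstract makes for ELM over a gradient-trained fully connected layer.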

  19. Transfer learning for bimodal biometrics recognition

    NASA Astrophysics Data System (ADS)

    Dan, Zhiping; Sun, Shuifa; Chen, Yanfei; Gan, Haitao

    2013-10-01

    Biometric recognition aims to identify individuals on the basis of previously acquired knowledge of their traits. Because using multiple biometric traits of an individual makes more information available for recognition, multi-biometrics has been shown to produce higher accuracy than single biometrics. However, a common problem with traditional machine learning is that the training and test data should lie in the same feature space and have the same underlying distribution; if the distributions or features differ between training and future data, model performance often drops. In this paper, we propose a transfer learning method for face recognition on bimodal biometrics. The training and test samples of bimodal biometric images consist of visible-light face images and infrared face images. Our algorithm transfers knowledge across feature spaces, relaxing the assumptions of a shared feature space and a shared underlying distribution by automatically learning a mapping between two different but related kinds of face images. Experiments on face images show that the accuracy of face recognition is greatly improved by the proposed method compared with previous methods, demonstrating its effectiveness and robustness.
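
    The core idea, learning a mapping between two related feature spaces from paired samples, can be sketched with a linear least-squares map from infrared to visible-light features followed by nearest-neighbor matching. This is a simplification of the paper's method; all data and names are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

n_id, d = 20, 12
visible = rng.standard_normal((n_id, d))   # visible-light gallery, one per identity
A = rng.standard_normal((d, d))            # unknown cross-modality transform
infrared = visible @ A + 0.01 * rng.standard_normal((n_id, d))  # paired IR images

# learn the IR -> visible mapping from the paired training identities
M = np.linalg.lstsq(infrared, visible, rcond=None)[0]

# recognize: map each IR probe, then match to the nearest gallery entry
mapped = infrared @ M
dists = ((mapped[:, None, :] - visible[None, :, :]) ** 2).sum(axis=2)
acc = (np.argmin(dists, axis=1) == np.arange(n_id)).mean()
```

    Once the mapping is learned, probes from one modality can be compared against a gallery from the other, which is the relaxation of the "same feature space" assumption the abstract describes.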

  20. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is used to train and recognize the visual words and build the initial classification model, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate while requiring less recognition time in the compressed domain.
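
    The incremental-learning step, adjusting the classifier as new samples arrive rather than retraining from scratch, can be sketched with an online logistic-regression learner. This is a generic stand-in for the paper's covering-algorithm pipeline, with synthetic vectors in place of visual-word histograms:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class IncrementalClassifier:
    """Online logistic regression: one SGD step per incoming sample."""
    def __init__(self, n_feat, lr=0.1):
        self.w = np.zeros(n_feat)
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, x, y):
        err = sigmoid(x @ self.w + self.b) - y     # prediction error
        self.w -= self.lr * err * x                # adjust rules in place
        self.b -= self.lr * err

    def predict(self, X):
        return (sigmoid(X @ self.w + self.b) > 0.5).astype(int)

# stand-in for visual-word features of two image classes
X = np.vstack([rng.normal(-1.5, 1.0, (200, 5)),
               rng.normal(+1.5, 1.0, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
order = rng.permutation(400)

clf = IncrementalClassifier(n_feat=5)
for i in order:                                    # samples arrive one at a time
    clf.partial_fit(X[i], y[i])
acc = (clf.predict(X) == y).mean()
```

    Each new sample updates the weights in constant time, so the filter can keep absorbing newly flagged images without revisiting the old training set.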

  1. Sticking with the nice guy: trait warmth information impairs learning and modulates person perception brain network activity.

    PubMed

    Lee, Victoria K; Harris, Lasana T

    2014-12-01

    Social learning requires inferring social information about another person, as well as evaluating outcomes. Previous research shows that prior social information biases decision making and reduces reliance on striatal activity during learning (Delgado, Frank, & Phelps, Nature Neuroscience 8 (11): 1611-1618, 2005). A rich literature in social psychology on person perception demonstrates that people spontaneously infer social information when viewing another person (Fiske & Taylor, 2013) and engage a network of brain regions, including the medial prefrontal cortex, temporal parietal junction, superior temporal sulcus, and precuneus (Amodio & Frith, Nature Reviews Neuroscience, 7(4), 268-277, 2006; Haxby, Gobbini, & Montgomery, 2004; van Overwalle Human Brain Mapping, 30, 829-858, 2009). We investigate the role of these brain regions during social learning about well-established dimensions of person perception-trait warmth and trait competence. We test the hypothesis that activity in person perception brain regions interacts with learning structures during social learning. Participants play an investment game where they must choose an agent to invest on their behalf. This choice is guided by cues signaling trait warmth or trait competence based on framing of monetary returns. Trait warmth information impairs learning about human but not computer agents, while trait competence information produces similar learning rates for human and computer agents. We see increased activation to warmth information about human agents in person perception brain regions. Interestingly, activity in person perception brain regions during the decision phase negatively predicts activity in the striatum during feedback for trait competence inferences about humans. These results suggest that social learning may engage additional processing within person perception brain regions that hampers learning in economic contexts.

  2. Emotional memory and perception in temporal lobectomy patients with amygdala damage.

    PubMed

    Brierley, B; Medford, N; Shaw, P; David, A S

    2004-04-01

    The human amygdala is implicated in the formation of emotional memories and the perception of emotional stimuli, particularly fear, across various modalities. To discern the extent to which these functions are related, 28 patients who had anterior temporal lobectomy (13 left and 15 right) for intractable epilepsy were recruited. Structural magnetic resonance imaging showed that three of them had atrophy of their remaining amygdala. All participants were given tests of affect perception from facial and vocal expressions and of emotional memory, using a standard narrative test and a novel test of word recognition. The results were standardised against matched healthy controls. Performance on all emotion tasks in patients with unilateral lobectomy ranged from unimpaired to moderately impaired. Perception of emotions in faces and voices was (with exceptions) significantly positively correlated, indicating multimodal emotional processing. However, there was no correlation between the subjects' performance on tests of emotional memory and perception. Several subjects showed strong emotional memory enhancement but poor fear perception. Patients with bilateral amygdala damage had greater impairment, particularly on the narrative test of emotional memory, one showing superior fear recognition but absent memory enhancement. Bilateral amygdala damage is particularly disruptive of emotional memory processes in comparison with unilateral temporal lobectomy. On a cognitive level, the pattern of results implies that perception of emotional expressions and emotional memory are supported by separate processing systems or streams.

  3. Recognising the forest, but not the trees: an effect of colour on scene perception and recognition.

    PubMed

    Nijboer, Tanja C W; Kanai, Ryota; de Haan, Edward H F; van der Smagt, Maarten J

    2008-09-01

    Colour has been shown to facilitate the recognition of scene images, but only when these images contain natural scenes, for which colour is 'diagnostic'. Here we investigate whether colour can also facilitate memory for scene images, and whether this would hold for natural scenes in particular. In the first experiment participants first studied a set of colour and greyscale natural and man-made scene images. Next, the same images were presented, randomly mixed with a different set. Participants were asked to indicate whether they had seen the images during the study phase. Surprisingly, performance was better for greyscale than for coloured images, and this difference was due to the higher false alarm rate for both natural and man-made coloured scenes. We hypothesized that this increase in false alarm rate was due to a shift from scrutinizing details of the image to recognition of the gist of the (coloured) image. A second experiment, utilizing images without a nameable gist, confirmed this hypothesis, as participants now performed equally on greyscale and coloured images. In the final experiment we specifically targeted the more detail-based perception and recognition for greyscale images versus the more gist-based perception and recognition for coloured images with a change detection paradigm. The results show that changes to images are detected faster when image pairs were presented in greyscale than in colour. This counterintuitive result held for both natural and man-made scenes (but not for scenes without nameable gist) and thus corroborates the shift from detailed processing of greyscale images to gist-based processing of coloured images.

  4. Students' Perceptions of Life Skill Development in Project-Based Learning Schools

    ERIC Educational Resources Information Center

    Meyer, Kimberly; Wurdinger, Scott

    2016-01-01

    This research aimed to examine students' perceptions of their life skills while attending project-based learning (PBL) schools. The study focused on three questions including: (1) What are students' perceptions of their development of life skills in project-based learning schools?; (2) In what ways, if any, do students perceive an increase in…

  5. Fusing a Reversed and Informal Learning Scheme and Space: Student Perceptions of Active Learning in Physical Chemistry

    ERIC Educational Resources Information Center

    Donnelly, Julie; Hernández, Florencio E.

    2018-01-01

    Physical chemistry students often have negative perceptions and low expectations for success in physical chemistry, attitudes that likely affect their performance in the course. Despite the results of several studies indicating increased positive perception of physical chemistry when active learning strategies are used, a recent survey of faculty…

  6. Examine Middle School Students' Constructivist Environment Perceptions in Turkey: School Location and Class Size

    ERIC Educational Resources Information Center

    Yigit, Nevzat; Alpaslan, Muhammet Mustafa; Cinemre, Yasin; Balcin, Bilal

    2017-01-01

    This study aims to examine the middle school students' perceptions of the classroom learning environment in the science course in Turkey in terms of school location and class size. In the study the Assessing of Constructivist Learning Environment (ACLE) questionnaire was utilized to map students' perceptions of the classroom learning environment.…

  7. Disentangling beat perception from sequential learning and examining the influence of attention and musical abilities on ERP responses to rhythm.

    PubMed

    Bouwer, Fleur L; Werner, Carola M; Knetemann, Myrthe; Honing, Henkjan

    2016-05-01

    Beat perception is the ability to perceive temporal regularity in musical rhythm. When a beat is perceived, predictions about upcoming events can be generated. These predictions can influence processing of subsequent rhythmic events. However, statistical learning of the order of sounds in a sequence can also affect processing of rhythmic events and must be differentiated from beat perception. In the current study, using EEG, we examined the effects of attention and musical abilities on beat perception. To ensure we measured beat perception and not absolute perception of temporal intervals, we used alternating loud and soft tones to create a rhythm with two hierarchical metrical levels. To control for sequential learning of the order of the different sounds, we used temporally regular (isochronous) and jittered rhythmic sequences. The order of sounds was identical in both conditions, but only the regular condition allowed for the perception of a beat. Unexpected intensity decrements were introduced on the beat and offbeat. In the regular condition, both beat perception and sequential learning were expected to enhance detection of these deviants on the beat. In the jittered condition, only sequential learning was expected to affect processing of the deviants. ERP responses to deviants were larger on the beat than offbeat in both conditions. Importantly, this difference was larger in the regular condition than in the jittered condition, suggesting that beat perception influenced responses to rhythmic events in addition to sequential learning. The influence of beat perception was present both with and without attention directed at the rhythm. Moreover, beat perception as measured with ERPs correlated with musical abilities, but only when attention was directed at the stimuli. Our study shows that beat perception is possible when attention is not directed at a rhythm. In addition, our results suggest that attention may mediate the influence of musical abilities on beat perception.

  8. Social Perception in Learning Disabled Adolescents.

    ERIC Educational Resources Information Center

    Axelrod, Lee

    1982-01-01

    Nonverbal social perception in 54 learning disabled adolescents was investigated using standardized tests of social intelligence and nonverbal communication. LD adolescents (grades 8 and 9) were significantly lower in nonverbal social perception skill than controls. (Author/CL)

  9. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    PubMed

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manually constructing the labeled data required by state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of manual annotation required. Previous work on active learning, however, focused on the tasks of entity recognition and protein-protein interactions, not on event extraction tasks for multiple event types. It also did not consider the evidence of event participants, which might be a clue to the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents by informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems, as follows. We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents (1) by using a language model that measures the probabilities of the expression of multiple events in documents and (2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g., proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that it can achieve better performance than previous methods such as entropy- and Gibbs-error-based methods and a conventional committee-based method. We also show that incorporating named entity recognition into active learning for event extraction, together with the unknown word handling, further improves the active learning method. In addition, adapting the active learning method to named entity recognition tasks also improves the document selection for manual annotation of named entities.
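
    The committee idea can be sketched with a minimal query-by-committee loop: two classifiers trained on different bootstrap resamples of the labeled pool, with the unlabeled items they disagree on flagged for manual annotation. This is a generic illustration on synthetic vectors, not the paper's language-model-based ranking, and every name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroid(X, y):
    """A weak committee member: nearest-centroid classifier."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda Z: (np.linalg.norm(Z - c1, axis=1)
                      < np.linalg.norm(Z - c0, axis=1)).astype(int)

# small labeled pool and a larger unlabeled pool of "documents"
Xl = np.vstack([rng.normal(-1.0, 1.0, (30, 2)),
                rng.normal(+1.0, 1.0, (30, 2))])
yl = np.array([0] * 30 + [1] * 30)
Xu = rng.normal(0.0, 1.5, (200, 2))                # unlabeled pool

# committee of two members, each trained on a bootstrap resample
committee = []
for _ in range(2):
    idx = rng.integers(0, len(Xl), len(Xl))
    committee.append(fit_centroid(Xl[idx], yl[idx]))

votes = np.array([m(Xu) for m in committee])
disagree = votes[0] != votes[1]                    # informative items
query_idx = np.flatnonzero(disagree)               # send these for annotation
```

    Items on which the committee splits lie near the decision boundary, so labeling them is expected to be more informative than labeling items both members already agree on.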

  10. Student Perceptions of E-Learning Environments, Self-Regulated Learning and Academic Performance

    ERIC Educational Resources Information Center

    Covington, Keisha Casan Danielle

    2012-01-01

    Student perceptions of e-learning are potential causes of student dropout in online education. The social cognitive theoretical view was used to investigate the relationship between perceived e-learning environments, self-regulated learning (SRL), and academic performance in online education. This mixed methods study used a quantitative…

  11. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.

  12. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  13. It doesn't matter what you say: FMRI correlates of voice learning and recognition independent of speech content.

    PubMed

    Zäske, Romi; Awwad Shiekh Hasan, Bashar; Belin, Pascal

    2017-09-01

    Listeners can recognize newly learned voices from previously unheard utterances, suggesting the acquisition of high-level speech-invariant voice representations during learning. Using functional magnetic resonance imaging (fMRI) we investigated the anatomical basis underlying the acquisition of voice representations for unfamiliar speakers independent of speech, and their subsequent recognition among novel voices. Specifically, listeners studied voices of unfamiliar speakers uttering short sentences and subsequently classified studied and novel voices as "old" or "new" in a recognition test. To investigate "pure" voice learning, i.e., independent of sentence meaning, we presented German sentence stimuli to non-German speaking listeners. To disentangle stimulus-invariant and stimulus-dependent learning, during the test phase we contrasted a "same sentence" condition in which listeners heard speakers repeating the sentences from the preceding study phase, with a "different sentence" condition. Voice recognition performance was above chance in both conditions although, as expected, performance was higher for same than for different sentences. During study phases activity in the left inferior frontal gyrus (IFG) was related to subsequent voice recognition performance and same versus different sentence condition, suggesting an involvement of the left IFG in the interactive processing of speaker and speech information during learning. Importantly, at test reduced activation for voices correctly classified as "old" compared to "new" emerged in a network of brain areas including temporal voice areas (TVAs) of the right posterior superior temporal gyrus (pSTG), as well as the right inferior/middle frontal gyrus (IFG/MFG), the right medial frontal gyrus, and the left caudate. 
This effect of voice novelty did not interact with sentence condition, suggesting a role of temporal voice-selective areas and extra-temporal areas in the explicit recognition of learned voice identity, independent of speech content. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Robust Radar Emitter Recognition Based on the Three-Dimensional Distribution Feature and Transfer Learning

    PubMed Central

    Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam

    2016-01-01

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a feature that is robust against signal-to-noise ratio (SNR) variation. Last but not least, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the proposed approach outperforms existing approaches in both accuracy and robustness. PMID:26927111
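    The abstract does not specify how the time-frequency-energy feature is computed; as a rough, hypothetical illustration of the idea, a pulse can be summarized by short-time Fourier energies pooled into a fixed-size grid (all function names and parameters below are our own, not the paper's):

    ```python
    import numpy as np

    def tfe_grid(x, win=64, hop=32, n_bins=8):
        """Illustrative time-frequency-energy descriptor (not the paper's
        exact 'cubic feature'): short-time Fourier energies pooled into a
        fixed n_bins x n_bins grid, so pulses of any length compare alike."""
        frames = np.array([x[i:i + win] for i in range(0, len(x) - win + 1, hop)])
        spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2  # time x freq
        grid = np.zeros((n_bins, n_bins))
        t_edges = np.linspace(0, spec.shape[0], n_bins + 1, dtype=int)
        f_edges = np.linspace(0, spec.shape[1], n_bins + 1, dtype=int)
        for i in range(n_bins):
            for j in range(n_bins):
                block = spec[t_edges[i]:t_edges[i + 1], f_edges[j]:f_edges[j + 1]]
                grid[i, j] = block.mean() if block.size else 0.0
        return grid / (grid.sum() + 1e-12)  # normalize out absolute power

    # Toy linear-FM (chirp) pulse: instantaneous frequency sweeps 50 -> 250 Hz
    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    pulse = np.sin(2 * np.pi * (50 * t + 100 * t ** 2))
    feat = tfe_grid(pulse)
    ```

    In the paper's pipeline, a descriptor like this would then be made robust across SNRs (the transfer-learning step) before classification with a relevance vector machine.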

  15. Robust Radar Emitter Recognition Based on the Three-Dimensional Distribution Feature and Transfer Learning.

    PubMed

    Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam

    2016-02-25

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a feature that is robust against signal-to-noise ratio (SNR) variation. Last but not least, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the proposed approach outperforms existing approaches in both accuracy and robustness.

  16. Perception Evolution Network Based on Cognition Deepening Model--Adapting to the Emergence of New Sensory Receptor.

    PubMed

    Xing, Youlu; Shen, Furao; Zhao, Jinxi

    2016-03-01

    The proposed perception evolution network (PEN) is a biologically inspired neural network model for unsupervised learning and online incremental learning. It is able to automatically learn suitable prototypes from learning data in an incremental way, and it does not require a predefined prototype number or a predefined similarity threshold. Unlike existing unsupervised neural network models, PEN permits the emergence of a new dimension of perception in the perception field of the network. When a new dimension of perception is introduced, PEN is able to integrate the new dimensional sensory inputs with the learned prototypes, i.e., the prototypes are mapped to a high-dimensional space, which consists of both the original dimension and the new dimension of the sensory inputs. In the experiment, artificial data and real-world data are used to test the proposed PEN, and the results show that PEN can work effectively.
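    The two ingredients the abstract highlights (incremental prototype learning without a fixed prototype count, and absorbing a newly appearing sensory dimension) can be sketched minimally as follows. This is our own simplification in the spirit of PEN, not the published algorithm; the class name, update rule, and threshold handling are all illustrative:

    ```python
    import numpy as np

    class IncrementalPrototypes:
        """Toy online prototype learner that tolerates a new sensory
        dimension appearing mid-stream (loosely PEN-like; not the paper's
        actual update rules)."""

        def __init__(self, radius=1.0, lr=0.1):
            self.radius = radius   # inputs within this distance update a prototype
            self.lr = lr
            self.prototypes = []   # list of np.ndarray, grown as needed

        def _grow_dims(self, dim):
            # A new perception dimension appeared: pad existing prototypes
            # with zeros on the new axes so old and new inputs are comparable.
            self.prototypes = [np.pad(p, (0, dim - p.size)) for p in self.prototypes]

        def observe(self, x):
            x = np.asarray(x, dtype=float)
            if self.prototypes and x.size > self.prototypes[0].size:
                self._grow_dims(x.size)
            if not self.prototypes:
                self.prototypes.append(x.copy())
                return 0
            d = [np.linalg.norm(x - p) for p in self.prototypes]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                self.prototypes[k] += self.lr * (x - self.prototypes[k])  # pull toward input
                return k
            self.prototypes.append(x.copy())  # novel input: spawn a new prototype
            return len(self.prototypes) - 1

    # Two 2-D clusters, then a 3-D observation arrives mid-stream
    model = IncrementalPrototypes(radius=1.0, lr=0.1)
    for obs in ([0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.0, 1.0]):
        model.observe(obs)
    ```

    After the final observation, both prototypes live in the enlarged three-dimensional space, and no prototype count or similarity threshold had to be fixed in advance beyond the illustrative `radius`.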

  17. Learning by Sorting

    ERIC Educational Resources Information Center

    Lovrencic, Michael; Vena, Laurie

    2014-01-01

    A kinesthetic technique for learning to recognize elements and compounds is presented in this article. The current common pedagogy appears to merge recognition and implementation into one naming method. A separate recognition skill is critical to students being able to correctly name and write the formulas of compounds. This article focuses on…

  18. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  19. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of Eisenberg et al. (2002).

    PubMed

    Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F

    Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002), who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. 
However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.

  20. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children with Normal Hearing: A Replication and Extension of Eisenberg et al., 2002

    PubMed Central

    Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.

    2016-01-01

    Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. 
However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787

  1. Deficits in social perception in opioid maintenance patients, abstinent opioid users and non-opioid users.

    PubMed

    McDonald, Skye; Darke, Shane; Kaye, Sharlene; Torok, Michelle

    2013-03-01

    This study aimed to compare emotion perception and social inference in opioid maintenance patients with abstinent ex-users and non-heroin-using controls, and determine whether any deficits in social perception could be accounted for by cognitive deficits and/or risk factors for brain damage. Case-control. Sydney, Australia. A total of 125 maintenance patients (MAIN), 50 abstinent opiate users (ABST) and 50 matched controls (CON). The Awareness of Social Inference Test (TASIT) was used to measure emotion perception and social inference. Measures were also taken of executive function, working memory, information processing speed, verbal/non-verbal learning and psychological distress. After adjusting for age, sex, pre-morbid IQ and psychological distress, the MAIN group was impaired relative to CON (β = -0.19, P < 0.05) and ABST (β = -0.19, P < 0.05) on emotion perception and relative to CON (β = -0.25, P < 0.001) and ABST (β = -0.24, P < 0.01) on social inference. In neither case did the CON and ABST groups differ. For both emotion perception (P < 0.001) and social inference (P < 0.001), pre-morbid IQ was a significant independent predictor. Cognitive function was a major predictor of poor emotion perception (β = -0.44, P < 0.001) and social inference (β = -0.48, P < 0.001). Poor emotion recognition was also predicted by number of heroin overdoses (β = -0.14, P < 0.05). Neither time in treatment nor type of maintenance medication (methadone or buprenorphine) was related to performance. People in opioid maintenance treatment may have an impaired capacity for emotion perception and ability to make inferences about social situations. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.

  2. Nigerian Physiotherapy Clinical Students' Perception of Their Learning Environment Measured by the Dundee Ready Education Environment Measure Inventory

    ERIC Educational Resources Information Center

    Odole, Adesola C.; Oyewole, Olufemi O.; Ogunmola, Oluwasolape T.

    2014-01-01

    The identification of the learning environment and the understanding of how students learn will help teacher to facilitate learning and plan a curriculum to achieve the learning outcomes. The purpose of this study was to investigate undergraduate physiotherapy clinical students' perception of University of Ibadan's learning environment. Using the…

  3. Distance Learning Students' Evaluation of E-Learning System in University of Tabuk, Saudi Arabia

    ERIC Educational Resources Information Center

    Al-Juda, Mefleh Qublan B.

    2017-01-01

    This study evaluates the experiences and perceptions of students regarding e-learning systems and their preparedness for e-learning. It also investigates the overall perceptions of students regarding e-learning and the factors influencing students' attitudes towards e-learning. The study uses convenience sampling in which students of the Education…

  4. Somatosensory Representations Link the Perception of Emotional Expressions and Sensory Experience.

    PubMed

    Kragel, Philip A; LaBar, Kevin S

    2016-01-01

    Studies of human emotion perception have linked a distributed set of brain regions to the recognition of emotion in facial, vocal, and body expressions. In particular, lesions to somatosensory cortex in the right hemisphere have been shown to impair recognition of facial and vocal expressions of emotion. Although these findings suggest that somatosensory cortex represents body states associated with distinct emotions, such as a furrowed brow or gaping jaw, functional evidence directly linking somatosensory activity and subjective experience during emotion perception is critically lacking. Using functional magnetic resonance imaging and multivariate decoding techniques, we show that perceiving vocal and facial expressions of emotion yields hemodynamic activity in right somatosensory cortex that discriminates among emotion categories, exhibits somatotopic organization, and tracks self-reported sensory experience. The findings both support embodied accounts of emotion and provide mechanistic insight into how emotional expressions are capable of biasing subjective experience in those who perceive them.

  5. Social cognition intervention in schizophrenia: Description of the training of affect recognition program - Indian version.

    PubMed

    Thonse, Umesh; Behere, Rishikesh V; Frommann, Nicole; Sharma, Psvn

    2018-01-01

    Social cognition refers to mental operations involved in processing of social cues and includes the domains of emotion processing, Theory of Mind (ToM), social perception, social knowledge and attributional bias. Significant deficits in ToM, emotion perception and social perception have been demonstrated in schizophrenia which can have an impact on socio-occupational functioning. Intervention modules for social cognition have demonstrated moderate effect sizes for improving emotion identification and discrimination. We describe the Indian version of the Training of Affect Recognition (TAR) program and a pilot study to demonstrate the feasibility of administering this intervention program in the Indian population. We also discuss the cultural sensibilities in adopting an intervention program for the Indian setting. To the best of our knowledge this is the first intervention program for social cognition for use in persons with schizophrenia in India. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Cortical visual dysfunction in children: a clinical study.

    PubMed

    Dutton, G; Ballantyne, J; Boyd, G; Bradnam, M; Day, R; McCulloch, D; Mackie, R; Phillips, S; Saunders, K

    1996-01-01

    Damage to the cerebral cortex was responsible for impairment in vision in 90 of 130 consecutive children referred to the Vision Assessment Clinic in Glasgow. Cortical blindness was seen in 16 children. Only 2 were mobile, but both showed evidence of navigational blind-sight. Cortical visual impairment, in which it was possible to estimate visual acuity but generalised severe brain damage precluded estimation of cognitive visual function, was observed in 9 children. Complex disorders of cognitive vision were seen in 20 children. These could be divided into five categories and involved impairment of: (1) recognition, (2) orientation, (3) depth perception, (4) perception of movement and (5) simultaneous perception. These disorders were observed in a variety of combinations. The remaining children showed evidence of reduced visual acuity and/or visual field loss, but without detectable disorders of cognitive visual function. Early recognition of disorders of cognitive vision is required if active training and remediation are to be implemented.

  7. Somatosensory Representations Link the Perception of Emotional Expressions and Sensory Experience123

    PubMed Central

    2016-01-01

    Abstract Studies of human emotion perception have linked a distributed set of brain regions to the recognition of emotion in facial, vocal, and body expressions. In particular, lesions to somatosensory cortex in the right hemisphere have been shown to impair recognition of facial and vocal expressions of emotion. Although these findings suggest that somatosensory cortex represents body states associated with distinct emotions, such as a furrowed brow or gaping jaw, functional evidence directly linking somatosensory activity and subjective experience during emotion perception is critically lacking. Using functional magnetic resonance imaging and multivariate decoding techniques, we show that perceiving vocal and facial expressions of emotion yields hemodynamic activity in right somatosensory cortex that discriminates among emotion categories, exhibits somatotopic organization, and tracks self-reported sensory experience. The findings both support embodied accounts of emotion and provide mechanistic insight into how emotional expressions are capable of biasing subjective experience in those who perceive them. PMID:27280154

  8. Learning pattern recognition and decision making in the insect brain

    NASA Astrophysics Data System (ADS)

    Huerta, R.

    2013-01-01

    We revise the current model of learning pattern recognition in the Mushroom Bodies of insects using current experimental knowledge about the location of learning, olfactory coding and connectivity. We show that it is possible to have an efficient pattern recognition device based on the architecture of the Mushroom Bodies, sparse code, mutual inhibition and Hebbian learning only in the connections from the Kenyon cells to the output neurons. We also show that, contrary to the conventional wisdom that artificial neural networks are the bioinspired model of the brain, the Mushroom Bodies actually closely resemble Support Vector Machines (SVMs). The derived SVM learning rules are situated in the Mushroom Bodies, are nearly identical to standard Hebbian rules, and require inhibition in the output. A very particular prediction of the model is that random elimination of the Kenyon cells in the Mushroom Bodies does not impair the ability to recognize odorants previously learned.
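    The architecture the abstract describes (a fixed random expansion into a sparse Kenyon-cell code, followed by a Hebbian readout with inhibition among output neurons) can be sketched as a toy classifier. Everything below is an illustrative simplification under our own parameter choices, not the paper's derived SVM rules:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sparse_kc_code(odors, n_kc=200, sparsity=0.05, seed=0):
        """Kenyon-cell-like expansion: fixed random projection, then keep
        only the top ~5% most active units per input (sparse binary code)."""
        r = np.random.default_rng(seed)
        proj = r.standard_normal((odors.shape[1], n_kc))
        act = odors @ proj
        k = max(1, int(sparsity * n_kc))
        thresh = np.sort(act, axis=1)[:, -k][:, None]  # k-th largest per row
        return (act >= thresh).astype(float)

    def train_hebbian(kc, labels, n_classes, lr=0.1, epochs=5):
        """Hebbian-style rule with a winner-take-all readout standing in
        for mutual inhibition: the correct output potentiates its active
        synapses; a wrongly winning rival is depressed."""
        w = np.zeros((kc.shape[1], n_classes))
        for _ in range(epochs):
            for x, y in zip(kc, labels):
                win = int(np.argmax(x @ w))
                w[:, y] += lr * x
                if win != y:
                    w[:, win] -= lr * x
        return w

    # Toy demo: two "odor" classes as noisy points around two random templates
    a = rng.standard_normal(20)
    b = rng.standard_normal(20)
    X = np.vstack([a + 0.1 * rng.standard_normal((30, 20)),
                   b + 0.1 * rng.standard_normal((30, 20))])
    y = np.array([0] * 30 + [1] * 30)
    kc = sparse_kc_code(X)
    w = train_hebbian(kc, y, 2)
    acc = ((kc @ w).argmax(axis=1) == y).mean()
    ```

    Because learning lives only in the KC-to-output weights, deleting a random subset of Kenyon cells (columns of `kc` and rows of `w`) degrades the sparse code only gradually, which is the robustness prediction the abstract makes.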

  9. Verbal learning on depressive pseudodementia: accentuate impairment of free recall, moderate on learning processes, and spared short-term and recognition memory.

    PubMed

    Paula, Jonas Jardim de; Miranda, Débora Marques; Nicolato, Rodrigo; Moraes, Edgar Nunes de; Bicalho, Maria Aparecida Camargos; Malloy-Diniz, Leandro Fernandes

    2013-09-01

    Depressive pseudodementia (DPD) is a clinical condition characterized by depressive symptoms followed by cognitive and functional impairment characteristic of dementia. Memory complaints are among the most commonly reported cognitive symptoms in DPD. The present study aims to assess the verbal learning profile of elderly patients with DPD. Ninety-six older adults (34 DPD and 62 controls) were assessed by neuropsychological tests including the Rey auditory-verbal learning test (RAVLT). A multivariate general linear model was used to assess group differences, controlling for demographic factors. Moderate or large effects were found on all RAVLT components, except for short-term and recognition memory. DPD impairs verbal memory, with a large effect size on free recall and a moderate effect size on learning. The sparing of short-term storage and recognition memory is useful in clinical contexts when a differential diagnosis is required.

  10. Blended Learning: The Student Viewpoint

    PubMed Central

    Shantakumari, N; Sajith, P

    2015-01-01

    Background: Blended learning (BL) is defined as “a way of meeting the challenges of tailoring learning and development to the needs of individuals by integrating the innovative and technological advances offered by online learning with the interaction and participation offered in the best of traditional learning.” The Gulf Medical University (GMU), Ajman, UAE, offers a number of courses which incorporate BL with contact classes and an online component on an E-learning platform. Insufficient learning satisfaction has been stated as an obstacle to its implementation and efficacy. Aim: To determine the students’ perceptions toward BL, which in turn will determine their satisfaction and the efficacy of the courses offered. Subjects and Methods: This was a cross-sectional study conducted at the GMU, Ajman between January and December 2013. Perceptions of BL process, content, and ease of use were collected from 75 students enrolled in the certificate courses offered by the university using a questionnaire. Student perceptions were assessed using Mann–Whitney U-test and Kruskal–Wallis test on the basis of gender, age, and course enrollment. Results: The median scores of all the questions in the three domains were above three, suggesting positive perceptions of BL. The distribution of perceptions was similar between gender and age. However, significant differences were observed in course enrollment (P = 0.02). Conclusion: Students hold a positive perception of the BL courses being offered in this university. The difference in perceptions among students of different courses suggests that the BL format offered needs modification according to course content to improve its perception. PMID:26500788

  11. The influence of writing practice on letter recognition in preschool children: a comparison between handwriting and typing.

    PubMed

    Longcamp, Marieke; Zerbato-Poudou, Marie-Thérèse; Velay, Jean-Luc

    2005-05-01

    A large body of data supports the view that movement plays a crucial role in letter representation and suggests that handwriting contributes to the visual recognition of letters. If so, changing the motor conditions while children are learning to write by using a method based on typing instead of handwriting should affect their subsequent letter recognition performances. In order to test this hypothesis, we trained two groups of 38 children (aged 3-5 years) to copy letters of the alphabet either by hand or by typing them. After three weeks of learning, we ran two recognition tests, one week apart, to compare the letter recognition performances of the two groups. The results showed that in the older children, the handwriting training gave rise to a better letter recognition than the typing training.

  12. Residents' perceptions of simulation as a clinical learning approach.

    PubMed

    Walsh, Catharine M; Garg, Ankit; Ng, Stella L; Goyal, Fenny; Grover, Samir C

    2017-02-01

    Simulation is increasingly being integrated into medical education; however, there is little research into trainees' perceptions of this learning modality. We elicited trainees' perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4-6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Residents' perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents' markedly narrow perception of simulation's capacity to support non-technical skills development or its use beyond introductory learning. Trainees' learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees' a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation.

  13. Residents’ perceptions of simulation as a clinical learning approach

    PubMed Central

    Walsh, Catharine M.; Garg, Ankit; Ng, Stella L.; Goyal, Fenny; Grover, Samir C.

    2017-01-01

    Background Simulation is increasingly being integrated into medical education; however, there is little research into trainees’ perceptions of this learning modality. We elicited trainees’ perceptions of simulation-based learning, to inform how simulation is developed and applied to support training. Methods We conducted an instrumental qualitative case study entailing 36 semi-structured one-hour interviews with 12 residents enrolled in an introductory simulation-based course. Trainees were interviewed at three time points: pre-course, post-course, and 4–6 weeks later. Interview transcripts were analyzed using a qualitative descriptive analytic approach. Results Residents’ perceptions of simulation included: 1) simulation serves pragmatic purposes; 2) simulation provides a safe space; 3) simulation presents perils and pitfalls; and 4) optimal design for simulation: integration and tension. Key findings included residents’ markedly narrow perception of simulation’s capacity to support non-technical skills development or its use beyond introductory learning. Conclusion Trainees’ learning expectations of simulation were restricted. Educators should critically attend to the way they present simulation to learners as, based on theories of problem-framing, trainees’ a priori perceptions may delimit the focus of their learning experiences. If they view simulation as merely a replica of real cases for the purpose of practicing basic skills, they may fail to benefit from the full scope of learning opportunities afforded by simulation. PMID:28344719

  14. Common polymorphism in the oxytocin receptor gene (OXTR) is associated with human social recognition skills

    PubMed Central

    Skuse, David H.; Lori, Adriana; Cubells, Joseph F.; Lee, Irene; Conneely, Karen N.; Puura, Kaija; Lehtimäki, Terho; Binder, Elisabeth B.; Young, Larry J.

    2014-01-01

    The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7–60 y) and sex. A common SNP in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range −0.6 to −1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans. PMID:24367110

  15. Common polymorphism in the oxytocin receptor gene (OXTR) is associated with human social recognition skills.

    PubMed

    Skuse, David H; Lori, Adriana; Cubells, Joseph F; Lee, Irene; Conneely, Karen N; Puura, Kaija; Lehtimäki, Terho; Binder, Elisabeth B; Young, Larry J

    2014-02-04

    The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7-60 y) and sex. A common SNP in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range -0.6 to -1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans.

  16. Effects of oxytocin on behavioral and ERP measures of recognition memory for own-race and other-race faces in women and men

    PubMed Central

    Herzmann, Grit; Bird, Christopher W.; Freeman, Megan; Curran, Tim

    2013-01-01

    Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin’s effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. PMID:23648370

  17. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages, much like basic features. However, it is not known whether this early enhancement of processing extends to face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role for pre-saccadic eye movement signals in human face recognition.

  18. Effects of oxytocin on behavioral and ERP measures of recognition memory for own-race and other-race faces in women and men.

    PubMed

    Herzmann, Grit; Bird, Christopher W; Freeman, Megan; Curran, Tim

    2013-10-01

    Oxytocin has been shown to affect human social information processing including recognition memory for faces. Here we investigated the neural processes underlying the effect of oxytocin on memorizing own-race and other-race faces in men and women. In a placebo-controlled, double-blind, between-subject study, participants received either oxytocin or placebo before studying own-race and other-race faces. We recorded event-related potentials (ERPs) during both the study and recognition phase to investigate neural correlates of oxytocin's effect on memory encoding, memory retrieval, and perception. Oxytocin increased the accuracy of familiarity judgments in the recognition test. Neural correlates for this effect were found in ERPs related to memory encoding and retrieval but not perception. In contrast to its facilitating effects on familiarity, oxytocin impaired recollection judgments, but in men only. Oxytocin did not differentially affect own-race and other-race faces. This study shows that oxytocin influences memory, but not perceptual processes, in a face recognition task and is the first to reveal sex differences in the effect of oxytocin on face memory. Contrary to recent findings in oxytocin and moral decision making, oxytocin did not preferentially improve memory for own-race faces. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Medical students' perception of the learning environment at King Saud University Medical College, Saudi Arabia, using DREEM Inventory.

    PubMed

    Soliman, Mona M; Sattar, Kamran; Alnassar, Sami; Alsaif, Faisal; Alswat, Khalid; Alghonaim, Mohamed; Alhaizan, Maysoon; Al-Furaih, Nawaf

    2017-01-01

    Students' perception of the learning environment is an important aspect of evaluating and improving an educational program. The College of Medicine at King Saud University (KSU) reformed its curriculum in 2009 from a traditional to a system-oriented hybrid curriculum. The objective of the present study was to determine the perception of the second batch (reformed curriculum) of medical graduates of the educational environment at the College of Medicine, KSU, using the Dundee Ready Education Environment Measure (DREEM) scale. Fifth-year medical students were asked to evaluate the educational program after graduation in May 2014, and the questionnaire was distributed to them electronically. The DREEM questionnaire consisted of 50 Likert-scale items covering five domains: students' perceptions of learning, perceptions of teachers, academic self-perceptions, perceptions of atmosphere, and social self-perceptions. Data were analyzed using SPSS. A total of 62 students participated in the study. The score for students' perception of learning ranged from 2.93 to 3.64 (overall mean score: 40.17). The score for students' perception of teachers ranged from 2.85 to 4.01 (overall mean score: 33.35). The score for students' academic self-perceptions ranged from 3.15 to 4.06 (overall mean score: 28.4). The score for students' perception of atmosphere ranged from 2.27 to 3.91 (overall mean score: 41.32). The score for students' social self-perceptions ranged from 2.85 to 4.33 (overall mean score: 24.33). Students' perceptions on all five sub-scales were positive, and the overall perception of the educational environment was satisfactory. This study was important for evaluating perceptions of the learning environment among medical graduates of the reformed curriculum, and it provided guidance on areas of the curriculum to improve.
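
    The DREEM-style scoring described above (Likert items summed within domains) can be sketched as follows. The item-to-domain mapping, item numbers, and ratings here are invented for illustration and are not the actual DREEM scoring key:

```python
# Illustrative DREEM-style domain scoring. The item numbers assigned to each
# domain below are hypothetical, NOT the real DREEM item map.
DOMAINS = {
    "perception_of_learning": [1, 7, 13],
    "perception_of_teachers": [2, 6, 8],
}

def domain_score(responses, items):
    """Sum the 0-4 Likert ratings for the items belonging to one domain."""
    return sum(responses[i] for i in items)

# One respondent's (made-up) item ratings, keyed by item number.
student = {1: 3, 7: 4, 13: 2, 2: 3, 6: 3, 8: 4}
print(domain_score(student, DOMAINS["perception_of_learning"]))  # -> 9
```

    In the study itself, such domain totals would then be averaged across the 62 respondents to give the overall mean scores quoted in the abstract.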

  20. Learning to distinguish between predators and non-predators: understanding the critical role of diet cues and predator odours in generalisation.

    PubMed

    Mitchell, Matthew D; Chivers, Douglas P; McCormick, Mark I; Ferrari, Maud C O

    2015-09-11

    It is critical for prey to recognise predators and distinguish predators from non-threatening species. Yet, we have little understanding of how prey develop effective predator recognition templates. Recent studies suggest that prey may actually learn key predator features which can be used to recognise novel species with similar characteristics. However, non-predators are sometimes mislabelled as predators when generalising recognition. Here, we conduct the first comprehensive investigation of how prey integrate information on predator odours and predator diet cues in generalisation, allowing them to discriminate between predators and non-predators. We taught lemon damselfish to recognise a predator fed a fish diet, and tested them for their response to the known predator and a series of novel predators (fed fish diet) and non-predators (fed squid diet) distributed across a phylogenetic gradient. Our findings show that damselfish distinguish between predators and non-predators when generalising recognition. Additional experiments revealed that generalised recognition did not result from recognition of predator odours or diet cues, but that damselfish based recognition on what they learned during the initial conditioning. Incorporating multiple sources of information enables prey to develop highly plastic and accurate recognition templates that will increase survival in patchy environments where they have little prior knowledge.

  1. Learning to distinguish between predators and non-predators: understanding the critical role of diet cues and predator odours in generalisation

    PubMed Central

    Mitchell, Matthew D.; Chivers, Douglas P.; McCormick, Mark I.; Ferrari, Maud C.O.

    2015-01-01

    It is critical for prey to recognise predators and distinguish predators from non-threatening species. Yet, we have little understanding of how prey develop effective predator recognition templates. Recent studies suggest that prey may actually learn key predator features which can be used to recognise novel species with similar characteristics. However, non-predators are sometimes mislabelled as predators when generalising recognition. Here, we conduct the first comprehensive investigation of how prey integrate information on predator odours and predator diet cues in generalisation, allowing them to discriminate between predators and non-predators. We taught lemon damselfish to recognise a predator fed a fish diet, and tested them for their response to the known predator and a series of novel predators (fed fish diet) and non-predators (fed squid diet) distributed across a phylogenetic gradient. Our findings show that damselfish distinguish between predators and non-predators when generalising recognition. Additional experiments revealed that generalised recognition did not result from recognition of predator odours or diet cues, but that damselfish based recognition on what they learned during the initial conditioning. Incorporating multiple sources of information enables prey to develop highly plastic and accurate recognition templates that will increase survival in patchy environments where they have little prior knowledge. PMID:26358861

  2. Monocular Advantage for Face Perception Implicates Subcortical Mechanisms in Adult Humans

    PubMed Central

    Gabay, Shai; Nestor, Adrian; Dundas, Eva; Behrmann, Marlene

    2014-01-01

    The ability to recognize faces accurately and rapidly is an evolutionarily adaptive process. Most studies examining the neural correlates of face perception in adult humans have focused on a distributed cortical network of face-selective regions. There is, however, robust evidence from phylogenetic and ontogenetic studies that implicates subcortical structures, and recently, some investigations in adult humans indicate subcortical correlates of face perception as well. The questions addressed here are whether low-level subcortical mechanisms for face perception (in the absence of changes in expression) are conserved in human adults, and if so, what is the nature of these subcortical representations. In a series of four experiments, we presented pairs of images to the same or different eyes. Participants’ performance demonstrated that subcortical mechanisms, indexed by monocular portions of the visual system, play a functional role in face perception. These mechanisms are sensitive to face-like configurations and afford a coarse representation of a face, comprised of primarily low spatial frequency information, which suffices for matching faces but not for more complex aspects of face perception such as sex differentiation. Importantly, these subcortical mechanisms are not implicated in the perception of other visual stimuli, such as cars or letter strings. These findings suggest a conservation of phylogenetically and ontogenetically lower-order systems in adult human face perception. The involvement of subcortical structures in face recognition provokes a reconsideration of current theories of face perception, which are reliant on cortical level processing, inasmuch as it bolsters the cross-species continuity of the biological system for face recognition. PMID:24236767

  3. Why Change to Active Learning? Pre-Service and In-Service Science Teachers' Perceptions

    ERIC Educational Resources Information Center

    O'Grady, Audrey; Simmie, Geraldine Mooney; Kennedy, Therese

    2014-01-01

    This article explores pre-service and in-service science teachers' perceptions on active learning, and examines the effectiveness of active learning by pre-service science teachers in the Irish second level classroom through a two-phase study. In the first phase, data on perceptions were gathered from final year pre-service teachers and in-service…

  4. Linking Recognition Practices and National Qualifications Frameworks: International Benchmarking of Experiences and Strategies on the Recognition, Validation and Accreditation (RVA) of Non-Formal and Informal Learning

    ERIC Educational Resources Information Center

    Singh, Madhu, Ed.; Duvekot, Ruud, Ed.

    2013-01-01

    This publication is the outcome of the international conference organized by UNESCO Institute for Lifelong Learning (UIL), in collaboration with the Centre for Validation of Prior Learning at Inholland University of Applied Sciences, the Netherlands, and in partnership with the French National Commission for UNESCO that was held in Hamburg in…

  5. Towards the Construction of a Personal Professional Pathway: An Experimental Project for the Recognition of Non-Formal and Informal Learning in the University of Catania

    ERIC Educational Resources Information Center

    Piazza, Roberta

    2013-01-01

    In Italy, accreditation of prior learning is a sensitive issue. Despite the lack of laws or qualification frameworks regulating the recognition of non-formal and informal learning, most Italian universities proceed with caution, allowing only a restricted number of credits in the university curriculum related to practical activities or to external…

  6. Learning and Inductive Inference

    DTIC Science & Technology

    1982-07-01

    a set of graph grammars to describe visual scenes. Other researchers have applied graph grammars to the pattern recognition of handwritten characters… Issues / Mostow's operationalizer / Learning from examples / Issues / Learning in control and pattern recognition… articles on rote learning and advice-taking. Kenneth Clarkson contributed the article on grammatical inference…

  7. Leveraging Cognitive Context for Object Recognition

    DTIC Science & Technology

    2014-06-01

    learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by…context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned…and accuracy of object recognition.

  8. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology.

    PubMed

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M; Lara, Juan A; Lizcano, David

    2017-01-19

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Moreover, neither the reward-punishment (r-p) learning process nor the convergence of results is sufficiently fast and efficient. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory: the model outperformed traditional methods from the literature with respect to learning reliability and efficiency.

  9. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology

    PubMed Central

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M.; Lara, Juan A.; Lizcano, David

    2017-01-01

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Moreover, neither the reward-punishment (r-p) learning process nor the convergence of results is sufficiently fast and efficient. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory: the model outperformed traditional methods from the literature with respect to learning reliability and efficiency. PMID:28106849
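
    The perception-pattern and reward-punishment mechanism described in this abstract might be sketched as below. This is a minimal illustration, not the authors' implementation: the discretization rule, action names, and scoring scheme are all assumptions made for the example.

```python
import random
from collections import defaultdict

def perceive(sensor_reading, bucket=10):
    """Map a raw sensor reading to a coarse perception pattern, so that
    similar perceptions (even at different positions) share one pattern."""
    return round(sensor_reading / bucket)

class RPLearner:
    """Statistically associates perception-action pairs via r-p feedback."""
    def __init__(self, actions):
        self.actions = actions
        # scores[(pattern, action)] accumulates reward-punishment results.
        self.scores = defaultdict(float)

    def act(self, pattern, explore=0.1):
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.scores[(pattern, a)])

    def reinforce(self, pattern, action, reward):
        # Positive results strengthen the pair; negative results weaken it.
        self.scores[(pattern, action)] += reward

learner = RPLearner(actions=["climb", "descend", "hold"])
p = perceive(87)                      # e.g. an altitude-like reading
learner.reinforce(p, "climb", +1.0)   # this action worked for pattern p
learner.reinforce(p, "descend", -1.0) # this one did not
print(learner.act(p, explore=0.0))    # -> climb
```

    Because readings are bucketed into patterns before association, experience gathered in one part of the environment transfers to perceptually similar situations elsewhere, which is the generalization property the abstract emphasizes.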

  10. The relationships between trait anxiety, place recognition memory, and learning strategy.

    PubMed

    Hawley, Wayne R; Grissom, Elin M; Dohanich, Gary P

    2011-01-20

    Rodents learn to navigate mazes using various strategies that are governed by specific regions of the brain. The type of strategy used when learning to navigate a spatial environment is moderated by a number of factors including emotional states. Heightened anxiety states, induced by exposure to stressors or administration of anxiogenic agents, have been found to bias male rats toward the use of a striatum-based stimulus-response strategy rather than a hippocampus-based place strategy. However, no study has yet examined the relationship between natural anxiety levels, or trait anxiety, and the type of learning strategy used by rats on a dual-solution task. In the current experiment, levels of inherent anxiety were measured in an open field and compared to performance on two separate cognitive tasks, a Y-maze task that assessed place recognition memory, and a visible platform water maze task that assessed learning strategy. Results indicated that place recognition memory on the Y-maze correlated with the use of place learning strategy on the water maze. Furthermore, lower levels of trait anxiety correlated positively with better place recognition memory and with the preferred use of place learning strategy. Therefore, competency in place memory and bias in place strategy are linked to the levels of inherent anxiety in male rats. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Writing Strengthens Orthography and Alphabetic-Coding Strengthens Phonology in Learning to Read Chinese

    ERIC Educational Resources Information Center

    Guan, Connie Qun; Liu, Ying; Chan, Derek Ho Leung; Ye, Feifei; Perfetti, Charles A.

    2011-01-01

    Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form. We report 2 studies that test this hypothesis in…

  12. Integrating Computer-Assisted Language Learning in Saudi Schools: A Change Model

    ERIC Educational Resources Information Center

    Alresheed, Saleh; Leask, Marilyn; Raiker, Andrea

    2015-01-01

    Computer-assisted language learning (CALL) technology and pedagogy have gained recognition globally for their success in supporting second language acquisition (SLA). In Saudi Arabia, the government aims to provide most educational institutions with computers and networking for integrating CALL into classrooms. However, the recognition of CALL's…

  13. Determinants of Teachers' Recognitions of Self-Regulated Learning Practices in Elementary Education

    ERIC Educational Resources Information Center

    Lombaerts, Koen; Engels, Nadine; van Braak, Johan

    2009-01-01

    The authors examined the relations among teacher characteristics, contextual factors, and the recognition of self-regulated learning (SRL). Participants of the survey study were 172 elementary school teachers in the Brussels Capital Region and surrounding area (Belgium). The authors assessed the interrelations of several measures on personal…

  14. Female Undergraduate Student Perceptions of Their Engagement in an Experiential Learning Activity

    ERIC Educational Resources Information Center

    Jahansouz, Sara Lynne

    2012-01-01

    This study explores Panhellenic Sorority Recruitment grounded in a learning-outcomes-based curriculum as a vehicle for student engagement and learning, examining the demographics of participants and the perception of learning that occurred within the context of engagement in experiential learning activities during the first week of the…

  15. Ways of Knowing as Learning Styles: Learning MAGIC with a Partner.

    ERIC Educational Resources Information Center

    Galotti, Kathleen M.; Drebus, David W.; Reimer, Rebecca L.

    2001-01-01

    College student pairs learned a complex card game using a scripted set of turns and written explanations, played the game, rated perceptions of and reactions to the learning session and their partner, and completed the Attitudes Toward Thinking and Learning Scale. Significant differences in perceptions of partners and sessions related to…

  16. Improving Student Teachers' Perceptions on Technology Integration Using a Blended Learning Programme

    ERIC Educational Resources Information Center

    Edannur, Sreekala; Marie, S. Maria Josephine Arokia

    2017-01-01

    This study examined student teachers' perceptions about Technology Integration (Blended Learning in this study) before and after their exposure to a Blended Learning Experimental Programme designed for the study for eight weeks. EDMODO (an open access Learning Management System) was used as the teaching learning platform for the implementation of…

  17. Learning Styles of EFL Saudi College-Level Students in On-Line and Traditional Educational Environments

    ERIC Educational Resources Information Center

    Alkhatnai, Mubarak

    2011-01-01

    The primary purpose of this study was to examine Saudi EFL college students' perceptual learning styles in order to determine whether their perception of their learning styles is a predictor of academic persistence, satisfaction and success in different learning environments. Participants' perceptions about their learning styles in both…

  18. Perceptions of School Principals on Participation in Professional Learning Communities as Job-Embedded Learning

    ERIC Educational Resources Information Center

    Gaudioso, Jennifer A.

    2017-01-01

    Principal Professional Learning Communities (PPLCs) have emerged as a vehicle for professional development of principals, but there is little research on how principals experience PPLCs or how districts can support…

  19. Safety culture perceptions of pharmacists in Malaysian hospitals and health clinics: a multicentre assessment using the Safety Attitudes Questionnaire

    PubMed Central

    Samsuri, Srima Elina; Pei Lin, Lua; Fahrni, Mathumalar Loganathan

    2015-01-01

    Objective To assess the safety attitudes of pharmacists, provide a profile of their domains of safety attitude and correlate their attitudes with self-reported rates of medication errors. Design A cross-sectional study utilising the Safety Attitudes Questionnaire (SAQ). Setting 3 public hospitals and 27 health clinics. Participants 117 pharmacists. Main outcome measure(s) Safety culture mean scores, variation in scores across working units and between hospitals versus health clinics, predictors of safety culture, and medication errors and their correlation. Results Response rate was 83.6% (117 valid questionnaires returned). Stress recognition (73.0±20.4) and working condition (54.8±17.4) received the highest and lowest mean scores, respectively. Pharmacists exhibited positive attitudes towards: stress recognition (58.1%), job satisfaction (46.2%), teamwork climate (38.5%), safety climate (33.3%), perception of management (29.9%) and working condition (15.4%). With the exception of stress recognition, those who worked in health clinics scored higher than those in hospitals (p<0.05) and higher scores (overall score as well as score for each domain except for stress recognition) correlated negatively with reported number of medication errors. Conversely, those working in hospital (versus health clinic) were 8.9 times more likely (p<0.01) to report a medication error (OR 8.9, CI 3.08 to 25.7). As stress recognition increased, the number of medication errors reported increased (p=0.023). Years of work experience (p=0.017) influenced the number of medication errors reported. For every additional year of work experience, pharmacists were 0.87 times less likely to report a medication error (OR 0.87, CI 0.78 to 0.98). Conclusions A minority (20.5%) of the pharmacists working in hospitals and health clinics was in agreement with the overall SAQ questions and scales. Pharmacists in outpatient and ambulatory units and those in health clinics had better perceptions of safety culture. As perceptions improved, the number of medication errors reported decreased. Group-specific interventions that target specific domains are necessary to improve the safety culture. PMID:26610761
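
    For readers unfamiliar with how odds ratios like the OR 8.9 (CI 3.08 to 25.7) above are reported, here is a hedged sketch: an OR and its 95% Wald interval come from exponentiating a logistic-regression coefficient and its interval. The coefficient and standard error below are back-derived illustrative values chosen to approximate the published numbers, not the study's actual estimates.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic coefficient and its Wald interval
    to get the odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values only: beta ≈ 2.186, se ≈ 0.54 give OR ≈ 8.9
# with a CI close to the reported (3.08, 25.7).
or_, lo, hi = odds_ratio_ci(beta=2.186, se=0.54)
print(round(or_, 1))  # -> 8.9
```

    An OR above 1 (hospital pharmacists, 8.9) means higher odds of reporting an error; an OR below 1 (0.87 per year of experience) means the odds fall with each unit increase of the predictor.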

  20. Students' Perceptions on Intrapreneurship Education--Prerequisites for Learning Organisations

    ERIC Educational Resources Information Center

    Kansikas, Juha; Murphy, Linda

    2010-01-01

    The aim of this qualitative study is to understand the prerequisites for learning organisations (LO) as perceived by university students. Intrapreneurship education offers possibilities to increase student's adaptation of learning organisation's climate and behaviour. By analysing students' perceptions, more information about learning organisation…
