Sample records for pair-based visual recognition

  1. Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning

    ERIC Educational Resources Information Center

    Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi

    2008-01-01

    The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…

  2. Visual Recognition Memory, Paired-Associate Learning, and Reading Achievement.

    ERIC Educational Resources Information Center

    Anderson, Roger H.; Samuels, S. Jay

The relationship between visual recognition memory and performance on a paired-associate task for good and poor readers was investigated. Subjects were three groups of 21, 21, and 22 children each, with mean IQs of 98.2, 108.1, and 118.0, respectively. Three experimental tasks, individually administered to each subject, measured visual…

  3. Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?

    PubMed

    Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni

    2015-09-01

The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. Twenty adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on three high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.

  4. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    PubMed

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Representational Account of Memory: Insights from Aging and Synesthesia.

    PubMed

    Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha

    2016-12-01

    The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.

  6. The Impact of a Modified Repeated-Reading Strategy Paired with Optical Character Recognition on the Reading Rates of Students with Visual Impairments

    ERIC Educational Resources Information Center

    Pattillo, Suzan Trefry; Heller, Kathryn Wolf; Smith, Maureen

    2004-01-01

    The repeated-reading strategy and optical character recognition were paired to demonstrate a functional relationship between the combined strategies and two factors: the reading rates of students with visual impairments and the students' self-perceptions, or attitudes, toward reading. The results indicated that all five students increased their…

  7. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

Current retinal prostheses can only generate low-resolution visual percepts made up of a limited number of phosphenes, elicited by an electrode array with uncontrollable color and restricted grayscale. With such limited perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Functions of graphemic and phonemic codes in visual word-recognition.

    PubMed

    Meyer, D E; Schvaneveldt, R W; Ruddy, M G

    1974-03-01

Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.

  9. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  10. Individual recognition based on communication behaviour of male fowl.

    PubMed

    Smith, Carolynn L; Taubert, Jessica; Weldon, Kimberly; Evans, Christopher S

    2016-04-01

Correctly directing social behaviour towards a specific individual requires an ability to discriminate between conspecifics. The mechanisms of individual recognition include phenotype matching and familiarity-based recognition. Communication-based recognition is a subset of familiarity-based recognition wherein the classification is based on behavioural or distinctive signalling properties. Male fowl (Gallus gallus) produce a visual display (tidbitting) upon finding food in the presence of a female. Females typically approach displaying males. However, males may tidbit without food. We used the distinctiveness of the visual display and the unreliability of some males to test for communication-based recognition in female fowl. We manipulated the prior experience of the hens with the males to create two classes of males: S(+), wherein the tidbitting signal was paired with a food reward to the female, and S(-), wherein the tidbitting signal occurred without food reward. We then conducted a sequential discrimination test with hens using a live video feed of a familiar male. The results of the discrimination tests revealed that hens discriminated between categories of males based on their signalling behaviour. These results suggest that fowl possess a communication-based recognition system. This is the first demonstration of live-to-video transfer of recognition in any species of bird. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Development of Flexible Visual Recognition Memory in Human Infants

    ERIC Educational Resources Information Center

    Robinson, Astri J.; Pascalis, Olivier

    2004-01-01

    Research using the visual paired comparison task has shown that visual recognition memory across changing contexts is dependent on the integrity of the hippocampal formation in human adults and in monkeys. The acquisition of contextual flexibility may contribute to the change in memory performance that occurs late in the first year of life. To…

  12. In infancy the timing of emergence of the other-race effect is dependent on face gender.

    PubMed

    Tham, Diana Su Yun; Bremner, J Gavin; Hay, Dennis

    2015-08-01

    Poorer recognition of other-race faces relative to own-race faces is well documented from late infancy to adulthood. Research has revealed an increase in the other-race effect (ORE) during the first year of life, but there is some disagreement regarding the age at which it emerges. Using cropped faces to eliminate discrimination based on external features, visual paired comparison and spontaneous visual preference measures were used to investigate the relationship between ORE and face gender at 3-4 and 8-9 months. Caucasian-White 3- to 4-month-olds' discrimination of Chinese, Malay, and Caucasian-White faces showed an own-race advantage for female faces alone whereas at 8-9 months the own-race advantage was general across gender. This developmental effect is accompanied by a preference for female over male faces at 4 months and no gender preference at 9 months. The pattern of recognition advantage and preference suggests that there is a shift from a female-based own-race recognition advantage to a general own-race recognition advantage, in keeping with a visual and social experience-based account of ORE. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  14. Euro Banknote Recognition System for Blind People.

    PubMed

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-20

This paper presents the development of a portable system that allows blind people to detect and recognize Euro banknotes. The device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter), fitted with additional infrared light and embedded into a pair of sunglasses, permitting blind and visually impaired people to handle Euro banknotes independently, especially when receiving their cash back when shopping. The banknote detection is based on modified Viola-Jones algorithms, while the banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.
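The value-recognition step matches SURF descriptors extracted from the camera frame against stored banknote templates. A minimal sketch of the descriptor-matching core with Lowe's ratio test follows; the random vectors, dimensions, and threshold are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def ratio_test_matches(query_desc, template_desc, ratio=0.75):
    """Match each query descriptor to its nearest template descriptor,
    keeping the match only if the nearest distance is clearly smaller
    than the second-nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = np.linalg.norm(template_desc - q, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((qi, int(nearest)))
    return matches
```

In a full pipeline the surviving matches would be tallied per banknote template, with the denomination chosen by the template collecting the most consistent matches.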

  15. Euro Banknote Recognition System for Blind People

    PubMed Central

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-01

This paper presents the development of a portable system that allows blind people to detect and recognize Euro banknotes. The device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter), fitted with additional infrared light and embedded into a pair of sunglasses, permitting blind and visually impaired people to handle Euro banknotes independently, especially when receiving their cash back when shopping. The banknote detection is based on modified Viola-Jones algorithms, while the banknote value recognition relies on the Speeded-Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively. PMID:28117703

  16. Semantic congruence affects hippocampal response to repetition of visual associations.

    PubMed

    McAndrews, Mary Pat; Girard, Todd A; Wilkins, Leanne K; McCormick, Cornelia

    2016-09-01

Recent research has shown complementary engagement of the hippocampus and medial prefrontal cortex (mPFC) in encoding and retrieving associations based on pre-existing or experimentally-induced schemas, such that the latter supports schema-congruent information whereas the former is more engaged for incongruent or novel associations. Here, we attempted to explore some of the boundary conditions in the relative involvement of those structures in short-term memory for visual associations. The current literature is based primarily on intentional evaluation of schema-target congruence and on study-test paradigms with relatively long delays between learning and retrieval. We used a continuous recognition paradigm to investigate hippocampal and mPFC activation to first and second presentations of scene-object pairs as a function of semantic congruence between the elements (e.g., beach-seashell versus schoolyard-lamp). All items were identical at first and second presentation, and the context scene, which was presented 500 ms prior to the appearance of the target object, was incidental to the task, which required a recognition response to the central target only. Very short lags (2-8 intervening stimuli) occurred between presentations. Encoding the targets with congruent contexts was associated with increased activation in visual cortical regions at initial presentation and faster response time at repetition, but we did not find enhanced activation in mPFC relative to incongruent stimuli at either presentation. We did observe enhanced activation in the right anterior hippocampus, as well as regions in visual and lateral temporal and frontal cortical regions, for the repetition of incongruent scene-object pairs. This pattern demonstrates rapid and incidental effects of schema processing in hippocampal, but not mPFC, engagement during continuous recognition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Unusual target site disruption by the rare-cutting HNH restriction endonuclease PacI

    PubMed Central

    Shen, Betty; Heiter, Daniel F.; Chan, Siu-Hong; Wang, Hua; Xu, Shuang-Yong; Morgan, Richard D.; Wilson, Geoffrey G.; Stoddard, Barry L.

    2010-01-01

The crystal structure of the rare-cutting HNH restriction endonuclease PacI in complex with its eight base pair target recognition sequence 5'-TTAATTAA-3' has been determined to 1.9 Å resolution. The enzyme forms an extended homodimer, with each subunit containing two zinc-bound motifs surrounding a ββα-metal catalytic site. The latter is unusual in that a tyrosine residue likely initiates strand cleavage. PacI dramatically distorts its target sequence from Watson-Crick duplex DNA base pairing, with every base separated from its original partner. Two bases on each strand are unpaired, four are engaged in non-canonical A:A and T:T base pairs, and the remaining two bases are matched with new Watson-Crick partners. This represents a highly unusual DNA binding mechanism for a restriction endonuclease, and implies that initial recognition of the target site might involve significantly different contacts from those visualized in the DNA-bound cocrystal structures. PMID:20541511

  18. The Functional Architecture of Visual Object Recognition

    DTIC Science & Technology

    1991-07-01

different forms of agnosia can provide clues to the representations underlying normal object recognition (Farah, 1990). For example, the pair-wise... patterns of deficit and sparing occur. In a review of 99 published cases of agnosia, the observed patterns of co-occurrence implicated two underlying

  19. Effects of Phonological and Orthographic Shifts on Children's Processing of Written Morphology: A Time-Course Study

    ERIC Educational Resources Information Center

    Quémart, Pauline; Casalis, Séverine

    2014-01-01

    We report two experiments that investigated whether phonological and/or orthographic shifts in a base word interfere with morphological processing by French 3rd, 4th, and 5th graders and adults (as a control group) along the time course of visual word recognition. In both experiments, prime-target pairs shared four possible relationships:…

  20. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    PubMed

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josée Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.

  1. A rule of seven in Watson-Crick base-pairing of mismatched sequences.

    PubMed

    Cisse, Ibrahim I; Kim, Hajin; Ha, Taekjip

    2012-05-13

    Sequence recognition through base-pairing is essential for DNA repair and gene regulation, but the basic rules governing this process remain elusive. In particular, the kinetics of annealing between two imperfectly matched strands is not well characterized, despite its potential importance in nucleic acid-based biotechnologies and gene silencing. Here we use single-molecule fluorescence to visualize the multiple annealing and melting reactions of two untethered strands inside a porous vesicle, allowing us to precisely quantify the annealing and melting rates. The data as a function of mismatch position suggest that seven contiguous base pairs are needed for rapid annealing of DNA and RNA. This phenomenological rule of seven may underlie the requirement for seven nucleotides of complementarity to seed gene silencing by small noncoding RNA and may help guide performance improvement in DNA- and RNA-based bio- and nanotechnologies, in which off-target effects can be detrimental.
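The phenomenological "rule of seven" lends itself to a small worked example: scan two aligned strands for their longest run of contiguous Watson-Crick matches and apply the seven-base-pair threshold. The sequences and helpers below are hypothetical illustrations of the stated rule, not the authors' analysis code, and they assume the target is already written in its antiparallel-aligned orientation:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def longest_contiguous_pairs(strand, aligned_target):
    """Longest run of contiguous Watson-Crick base pairs when the two
    strands are compared position by position (alignment assumed fixed)."""
    best = run = 0
    for a, b in zip(strand, aligned_target):
        run = run + 1 if COMPLEMENT.get(a) == b else 0
        best = max(best, run)
    return best

def anneals_rapidly(strand, aligned_target, rule=7):
    """Rule of seven: rapid annealing requires at least seven
    contiguous complementary base pairs."""
    return longest_contiguous_pairs(strand, aligned_target) >= rule
```

A single central mismatch can break a ten-base duplex into two sub-seven runs, which under this rule would slow annealing despite 90% overall complementarity.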

  2. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  3. Molecular recognition of DNA base pairs by the formamido/pyrrole and formamido/imidazole pairings in stacked polyamides.

    PubMed

    Buchmueller, Karen L; Staples, Andrew M; Uthe, Peter B; Howard, Cameron M; Pacheco, Kimberly A O; Cox, Kari K; Henry, James A; Bailey, Suzanna L; Horick, Sarah M; Nguyen, Binh; Wilson, W David; Lee, Moses

    2005-01-01

Polyamides containing an N-terminal formamido (f) group bind to the minor groove of DNA as staggered, antiparallel dimers in a sequence-specific manner. The formamido group increases the affinity and binding site size, and it promotes the molecules to stack in a staggered fashion thereby pairing itself with either a pyrrole (Py) or an imidazole (Im). There has not been a systematic study on the DNA recognition properties of the f/Py and f/Im terminal pairings. These pairings were analyzed here in the context of f-ImPyPy, f-ImPyIm, f-PyPyPy and f-PyPyIm, which contain the central pairing modes, -ImPy- and -PyPy-. The specificity of these triamides towards symmetrical recognition sites allowed for the f/Py and f/Im terminal pairings to be directly compared by SPR, CD and ΔTM experiments. The f/Py pairing, when placed next to the -ImPy- or -PyPy- central pairings, prefers A/T and T/A base pairs to G/C base pairs, suggesting that f/Py has similar DNA recognition specificity to Py/Py. With -ImPy- central pairings, f/Im prefers C/G base pairs (>10 times) to the other Watson-Crick base pairs; therefore, f/Im behaves like the Py/Im pair. However, the f/Im pairing is not selective for the C/G base pair when placed next to the -PyPy- central pairings.

  4. Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.

    PubMed

    Acha, Joana; Perea, Manuel

    2010-01-01

Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn, in part, from the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions, since phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.
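The prime manipulation can be illustrated with a small helper that builds a transposed-letter nonword by swapping two internal letters of a base word, keeping the exterior letters fixed as in the caniso-CASINO example. The function and its guard are hypothetical illustrations, not the experiment's stimulus-generation code:

```python
def transposed_letter_prime(word, i, j):
    """Build a transposed-letter prime by swapping the letters at
    0-based positions i and j, leaving the first and last letters
    intact (e.g. casino -> caniso with i=2, j=4)."""
    if not (0 < i < j < len(word) - 1):
        raise ValueError("transposition must stay internal to the word")
    letters = list(word)
    letters[i], letters[j] = letters[j], letters[i]
    return "".join(letters)

# A pseudohomophone variant (caniso -> kaniso) would then substitute
# a phonologically equivalent letter in the transposed string.
```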

  5. Stimulus Similarity and Encoding Time Influence Incidental Recognition Memory in Adult Monkeys with Selective Hippocampal Lesions

    ERIC Educational Resources Information Center

    Zeamer, Alyson; Meunier, Martine; Bachevalier, Jocelyne

    2011-01-01

    Recognition memory impairment after selective hippocampal lesions in monkeys is more profound when measured with visual paired-comparison (VPC) than with delayed nonmatching-to-sample (DNMS). To clarify this issue, we assessed the impact of stimuli similarity and encoding duration on the VPC performance in monkeys with hippocampal lesions and…

  6. Molecular recognition of DNA base pairs by the formamido/pyrrole and formamido/imidazole pairings in stacked polyamides

    PubMed Central

    Buchmueller, Karen L.; Staples, Andrew M.; Uthe, Peter B.; Howard, Cameron M.; Pacheco, Kimberly A. O.; Cox, Kari K.; Henry, James A.; Bailey, Suzanna L.; Horick, Sarah M.; Nguyen, Binh; Wilson, W. David; Lee, Moses

    2005-01-01

    Polyamides containing an N-terminal formamido (f) group bind to the minor groove of DNA as staggered, antiparallel dimers in a sequence-specific manner. The formamido group increases the affinity and binding site size, and it promotes the molecules to stack in a staggered fashion thereby pairing itself with either a pyrrole (Py) or an imidazole (Im). There has not been a systematic study on the DNA recognition properties of the f/Py and f/Im terminal pairings. These pairings were analyzed here in the context of f-ImPyPy, f-ImPyIm, f-PyPyPy and f-PyPyIm, which contain the central pairing modes, –ImPy– and –PyPy–. The specificity of these triamides towards symmetrical recognition sites allowed for the f/Py and f/Im terminal pairings to be directly compared by SPR, CD and ΔTM experiments. The f/Py pairing, when placed next to the –ImPy– or –PyPy– central pairings, prefers A/T and T/A base pairs to G/C base pairs, suggesting that f/Py has similar DNA recognition specificity to Py/Py. With –ImPy– central pairings, f/Im prefers C/G base pairs (>10 times) to the other Watson–Crick base pairs; therefore, f/Im behaves like the Py/Im pair. However, the f/Im pairing is not selective for the C/G base pair when placed next to the –PyPy– central pairings. PMID:15703305

  7. Imidazopyridine/Pyrrole and hydroxybenzimidazole/pyrrole pairs for DNA minor groove recognition.

    PubMed

    Renneberg, Dorte; Dervan, Peter B

    2003-05-14

The DNA binding properties of fused heterocycles imidazo[4,5-b]pyridine (Ip) and hydroxybenzimidazole (Hz) paired with pyrrole (Py) in eight-ring hairpin polyamides are reported. The recognition profiles of the Ip/Py and Hz/Py pairs were compared to the five-membered ring pairs Im/Py and Hp/Py on a DNA restriction fragment at four 6-base pair recognition sites which vary at a single position 5'-TGTNTA-3', where N = G, C, T, A. The Ip/Py pair distinguishes G·C from C·G, T·A, and A·T, and the Hz/Py pair distinguishes T·A from A·T, G·C, and C·G, affording a new set of heterocycle pairs to target the four Watson-Crick base pairs in the minor groove of DNA.
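The reported specificities extend the minor-groove pairing rules, which can be summarized as a lookup from a stacked heterocycle pair to its preferred Watson-Crick base pair. The dict below restates this abstract's findings alongside the established Im/Py, Py/Im, and Hp/Py codes, included for context; treat it as a reading aid rather than an exhaustive recognition code:

```python
# Preferred Watson-Crick base pair read by each stacked heterocycle
# pair in the DNA minor groove (top ring / bottom ring).
PAIRING_CODE = {
    ("Im", "Py"): "G·C",  # established five-membered-ring code
    ("Py", "Im"): "C·G",
    ("Hp", "Py"): "T·A",
    ("Ip", "Py"): "G·C",  # fused-ring pairs reported here
    ("Hz", "Py"): "T·A",
}

def preferred_base_pair(ring_a, ring_b):
    """Look up the base pair a heterocycle pairing reads, if tabulated."""
    return PAIRING_CODE.get((ring_a, ring_b), "unspecified")
```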

  8. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
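The HSF-only and LSF-only prime construction can be sketched as an ideal frequency split in the Fourier domain: everything inside a cutoff radius forms the low-spatial-frequency image, and the remainder the high-spatial-frequency image. The cutoff value and the ideal circular filter below are illustrative assumptions; the study's actual filtering parameters are not given here:

```python
import numpy as np

def split_spatial_frequencies(img, cutoff):
    """Split a grayscale image into low- and high-spatial-frequency
    components using an ideal circular low-pass filter in the
    Fourier domain; the two components sum back to the original."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h / 2.0, x - w / 2.0)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(np.where(radius <= cutoff, f, 0))))
    return low, img - low
```

In practice a smooth (e.g. Gaussian) filter is usually preferred over an ideal cutoff to avoid ringing artifacts in the filtered primes.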

  9. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to the simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual, not motor, adaptation. Action recognition, therefore, relies primarily on vision-based recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  10. Recognition of Watson-Crick base pairs: constraints and limits due to geometric selection and tautomerism

    PubMed Central

    Yusupov, Marat; Yusupova, Gulnara

    2014-01-01

    The natural bases of nucleic acids have a strong preference for one tautomer form, guaranteeing fidelity in their hydrogen bonding potential. However, base pairs observed in recent crystal structures of polymerases and ribosomes are best explained by an alternative base tautomer, leading to the formation of base pairs with Watson-Crick-like geometries. These observations set limits to geometric selection in molecular recognition of complementary Watson-Crick pairs for fidelity in replication and translation processes. PMID:24765524

  11. Study and response time for the visual recognition of 'similarity' and identity

    NASA Technical Reports Server (NTRS)

    Derks, P. L.; Bauer, T. M.

    1974-01-01

    Four subjects compared successively presented pairs of line patterns for a match between any lines in the pattern (similarity) and for a match between all lines (identity). The encoding or study times for pattern recognition from immediate memory and the latency in responses to comparison stimuli were examined. Qualitative differences within and between subjects were most evident in study times.

  12. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose-invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image, and a distance transform may be utilized to quantify the level of match.
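
    The pose-invariant pairing idea in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: the particular normalization shown (distances divided by the reference-pair baseline, angles measured from the reference axis) is only an assumption about one way to obtain scale and rotation invariance from two reference points.

    ```python
    import math

    def invariant_descriptor(p, r1, r2):
        # Describe point p relative to reference points r1 and r2.
        # Dividing by |r1 - r2| removes scale; measuring the angle
        # from the r1->r2 axis removes rotation (and translation).
        base_len = math.dist(r1, r2)
        base_ang = math.atan2(r2[1] - r1[1], r2[0] - r1[0])
        d = math.dist(r1, p) / base_len
        a = (math.atan2(p[1] - r1[1], p[0] - r1[0]) - base_ang) % (2 * math.pi)
        return d, a

    def similarity_transform(p, scale, theta, tx, ty):
        # Rotate by theta, scale uniformly, then translate by (tx, ty).
        c, s = math.cos(theta), math.sin(theta)
        return (scale * (c * p[0] - s * p[1]) + tx,
                scale * (s * p[0] + c * p[1]) + ty)
    ```

    Tabulating such descriptors for every pattern point would play the role of the parameter table: the table entries are unchanged when the whole pattern (points and reference points together) is rotated, scaled, or translated.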

  13. Visual attention: low-level and high-level viewpoints

    NASA Astrophysics Data System (ADS)

    Stentiford, Fred W. M.

    2012-06-01

    This paper provides a brief outline of approaches to modeling human visual attention. Bottom-up and top-down mechanisms are described, together with some of the problems that they face. It has been suggested in brain science that memory functions by trading measurement precision for associative power; sensory inputs from the environment are never identical on separate occasions, but the associations with memory compensate for the differences. A graphical representation for image similarity is described that relies on the size of the maximally associative structures (cliques) found between pairs of images. This is applied to the recognition of movie posters, the location and recognition of characters, and the recognition of faces. The similarity mechanism is shown to model pop-out effects when constraints are placed on the physical separation of pixels that correspond to nodes in the maximal cliques. The effect extends to modeling human visual behaviour on the Poggendorff illusion.

  14. Minimizing Skin Color Differences Does Not Eliminate the Own-Race Recognition Advantage in Infants

    PubMed Central

    Anzures, Gizelle; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Lee, Kang

    2011-01-01

    An abundance of experience with own-race faces and limited to no experience with other-race faces has been associated with better recognition memory for own-race faces in infants, children, and adults. This study investigated the developmental origins of this other-race effect (ORE) by examining the role of a salient perceptual property of faces—that of skin color. Six- and 9-month-olds’ recognition memory for own- and other-race faces was examined using infant-controlled habituation and visual-paired comparison at test. Infants were shown own- or other-race faces in color or with skin color cues minimized in grayscale images. Results for the color stimuli replicated previous findings that infants show an ORE in face recognition memory. Results for the grayscale stimuli showed that even when a salient perceptual cue to race, such as skin color information, is minimized, 6- to 9-month-olds, nonetheless, show an ORE in their face recognition memory. Infants’ use of shape-based and configural cues for face recognition is discussed. PMID:22039335

  15. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest-mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. New theoretical analysis is presented, covering the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods, such as a shifted power transformation for improving reliability, a coefficient-truncation method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction, are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.

  16. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in pose and facial expression, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the ATT database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
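
    The feature-extraction step named in this abstract, the local binary pattern (LBP), is standard and easy to sketch. The logarithmic preprocessing below is only a plausible stand-in (log(1 + intensity)) for the paper's visualization technique, whose exact form is not given here:

    ```python
    import math

    def log_enhance(img):
        # Hypothetical stand-in for the logarithmic visualization step:
        # compress the dynamic range with log(1 + intensity).
        return [[math.log1p(v) for v in row] for row in img]

    def lbp_code(img, x, y):
        # Classic 8-neighbour local binary pattern: each neighbour at
        # least as bright as the centre contributes one bit to the code.
        c = img[y][x]
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        code = 0
        for bit, (dy, dx) in enumerate(nbrs):
            if img[y + dy][x + dx] >= c:
                code |= 1 << bit
        return code
    ```

    Note that a monotone intensity transform such as log1p preserves all brightness comparisons, so the LBP codes are unchanged by it; this is precisely why LBP features are attractive under non-uniform illumination.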

  17. Functional cross‐hemispheric shift between object‐place paired associate memory and spatial memory in the human hippocampus

    PubMed Central

    Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin

    2016-01-01

    The hippocampus plays critical roles in both object‐based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object‐based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object‐place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subjects were expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object‐cueing period) to searching for its paired‐associate place (object‐cued place recognition period). Furthermore, the efficient retrieval of object‐place paired associate memory (object‐cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:27009679

  18. Visual Recognition of Age Class and Preference for Infantile Features: Implications for Species-Specific vs Universal Cognitive Traits in Primates

    PubMed Central

    Lemasson, Alban; Nagumo, Sumiharu; Masataka, Nobuo

    2012-01-01

    Despite not knowing the exact age of individuals, humans can estimate their rough age using age-related physical features. Nonhuman primates show some age-related physical features; however, the cognitive traits underlying their recognition of age class have not been revealed. Here, we tested the ability of two species of Old World monkey, Japanese macaques (JM) and Campbell's monkeys (CM), to spontaneously discriminate age classes using visual paired comparison (VPC) tasks based on the two distinct categories of infant and adult images. First, VPCs were conducted in JM subjects using conspecific JM stimuli. When analyzing the side of the first look, JM subjects significantly looked more often at novel images. Based on analyses of total looking durations, JM subjects looked at a novel infant image longer than they looked at a familiar adult image, suggesting the ability to spontaneously discriminate between the two age classes and a preference for infant over adult images. Next, VPCs were tested in CM subjects using heterospecific JM stimuli. CM subjects showed no difference in the side of their first look, but looked at infant JM images longer than they looked at adult images; the fact that CMs were totally naïve to JMs suggested that the attractiveness of infant images transcends species differences. This is the first report of visual age class recognition and a preference for infant over adult images in nonhuman primates. Our results suggest not only species-specific processing for age class recognition but also the evolutionary origins of the instinctive human perception of baby cuteness schema, proposed by the ethologist Konrad Lorenz. PMID:22685529

  19. Visual recognition of age class and preference for infantile features: implications for species-specific vs universal cognitive traits in primates.

    PubMed

    Sato, Anna; Koda, Hiroki; Lemasson, Alban; Nagumo, Sumiharu; Masataka, Nobuo

    2012-01-01

    Despite not knowing the exact age of individuals, humans can estimate their rough age using age-related physical features. Nonhuman primates show some age-related physical features; however, the cognitive traits underlying their recognition of age class have not been revealed. Here, we tested the ability of two species of Old World monkey, Japanese macaques (JM) and Campbell's monkeys (CM), to spontaneously discriminate age classes using visual paired comparison (VPC) tasks based on the two distinct categories of infant and adult images. First, VPCs were conducted in JM subjects using conspecific JM stimuli. When analyzing the side of the first look, JM subjects significantly looked more often at novel images. Based on analyses of total looking durations, JM subjects looked at a novel infant image longer than they looked at a familiar adult image, suggesting the ability to spontaneously discriminate between the two age classes and a preference for infant over adult images. Next, VPCs were tested in CM subjects using heterospecific JM stimuli. CM subjects showed no difference in the side of their first look, but looked at infant JM images longer than they looked at adult images; the fact that CMs were totally naïve to JMs suggested that the attractiveness of infant images transcends species differences. This is the first report of visual age class recognition and a preference for infant over adult images in nonhuman primates. Our results suggest not only species-specific processing for age class recognition but also the evolutionary origins of the instinctive human perception of baby cuteness schema, proposed by the ethologist Konrad Lorenz.

  20. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    PubMed

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels, which correspond to discrete emotion concepts, affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia, who have difficulty labeling their own emotions, struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology. (c) 2015 APA, all rights reserved.

  1. Evaluating structural pattern recognition for handwritten math via primitive label graphs

    NASA Astrophysics Data System (ADS)

    Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian

    2013-01-01

    Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
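
    The Hamming-distance comparison of label graphs described above can be sketched as follows. The dictionary encoding (one label per primitive, plus labels on primitive pairs) is an assumed representation for illustration, not the format of the publicly released utilities:

    ```python
    def label_graph_distance(output, target):
        # Hamming distances between two label graphs defined over the
        # same primitives: node-label disagreements capture classification
        # errors, while pair-label disagreements capture segmentation and
        # relationship errors. The counts can be reported separately or summed.
        node_err = sum(output["nodes"][p] != target["nodes"][p]
                       for p in target["nodes"])
        pair_err = sum(output["pairs"].get(e) != target["pairs"].get(e)
                       for e in set(output["pairs"]) | set(target["pairs"]))
        return node_err, pair_err
    ```

    Because both error types are counted over the same primitive-level graph, a single measure (the sum) or separate classification/segmentation/parsing scores fall out of the same comparison, as the abstract notes.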

  2. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor, and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog, aiding people such as the visually impaired or the elderly. This paper presents a recognition algorithm which finds the line of a series of Braille blocks, and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that lie in the path of a person accompanied by the guide robot.

  3. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual-Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be less effective than desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
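
    The core notion of a visual phrase as a frequently co-occurring visual word pair can be sketched with a simple support count. This toy version ignores the spatial co-occurrence and descriptiveness scoring the paper actually uses to select DVWs and DVPs:

    ```python
    from collections import Counter
    from itertools import combinations

    def mine_word_pairs(images, min_support):
        # images: iterable of sets of visual-word ids, one set per image.
        # A pair is kept as a candidate "visual phrase" if it co-occurs
        # in at least min_support images.
        counts = Counter()
        for words in images:
            counts.update(combinations(sorted(words), 2))
        return {pair for pair, n in counts.items() if n >= min_support}
    ```

    In the full framework, such candidate pairs would additionally be filtered by how well they discriminate particular object or scene categories before being admitted as DVPs.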

  4. [Hemispheric differences in letter matching of hiragana and katakana].

    PubMed

    Iizuka, K; Sato, H

    1992-07-01

    The purpose of the present study was to examine hemispheric differences in the letter matching of hiragana and katakana. Stimulus pairs, each consisting of one hiragana and one katakana letter, were presented unilaterally to the right or left visual hemifield with a tachistoscope. The subjects were 40 male right-handers. They were required to judge whether a pair of letters had the same name or a different one. A significant right visual hemifield superiority was observed for both the accuracy of recognition and reaction time. The results suggest that the callosal relay model of Zaidel may apply to the name matching task.

  5. Enzymatic Incorporation of Modified Purine Nucleotides in DNA.

    PubMed

    Abu El Asrar, Rania; Margamuljana, Lia; Abramov, Mikhail; Bande, Omprakash; Agnello, Stefano; Jang, Miyeon; Herdewijn, Piet

    2017-12-14

    A series of nucleotide analogues with a hypoxanthine base moiety (8-aminohypoxanthine, 1-methyl-8-aminohypoxanthine, and 8-oxohypoxanthine), together with 5-methylisocytosine, were tested as potential pairing partners of N8-glycosylated nucleotides with an 8-azaguanine or 8-aza-9-deazaguanine base moiety by using DNA polymerases (incorporation studies). The best results were obtained with the 5-methylisocytosine nucleotide, followed by the 1-methyl-8-aminohypoxanthine nucleotide. The experiments demonstrated that small differences in structure (8-azaguanine versus 8-aza-9-deazaguanine) might lead to significant differences in recognition efficiency and selectivity; that base pairing by Hoogsteen recognition at the polymerase level is possible; that 8-aza-9-deazaguanine forms a self-complementary base pair; and that a correlation exists between in vitro incorporation studies and in vivo recognition by natural bases in Escherichia coli, although this recognition is not absolute (exceptions were observed). © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. DNA sequence alignment by microhomology sampling during homologous recombination

    PubMed Central

    Qi, Zhi; Redding, Sy; Lee, Ja Yil; Gibb, Bryan; Kwon, YoungHo; Niu, Hengyao; Gaines, William A.; Sung, Patrick

    2015-01-01

    Homologous recombination (HR) mediates the exchange of genetic information between sister or homologous chromatids. During HR, members of the RecA/Rad51 family of recombinases must somehow search through vast quantities of DNA sequence to align and pair ssDNA with a homologous dsDNA template. Here we use single-molecule imaging to visualize Rad51 as it aligns and pairs homologous DNA sequences in real-time. We show that Rad51 uses a length-based recognition mechanism while interrogating dsDNA, enabling robust kinetic selection of 8-nucleotide (nt) tracts of microhomology, which kinetically confines the search to sites with a high probability of being a homologous target. Successful pairing with a 9th nucleotide coincides with an additional reduction in binding free energy and subsequent strand exchange occurs in precise 3-nt steps, reflecting the base triplet organization of the presynaptic complex. These findings provide crucial new insights into the physical and evolutionary underpinnings of DNA recombination. PMID:25684365
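
    The length-based recognition described above can be caricatured as an exact-match scan for 8-nt tracts. This toy sketch captures only the selectivity of the 8-nt threshold, none of the binding kinetics or the triplet-stepped strand exchange:

    ```python
    def microhomology_sites(ss_dna, ds_dna, tract=8):
        # Collect every 8-nt window of the invading single strand, then
        # report each duplex position whose 8-nt window matches exactly,
        # mimicking kinetic selection of microhomology tracts.
        windows = {ss_dna[i:i + tract] for i in range(len(ss_dna) - tract + 1)}
        return [j for j in range(len(ds_dna) - tract + 1)
                if ds_dna[j:j + tract] in windows]
    ```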

  7. Orthographic Processing in Visual Word Identification.

    ERIC Educational Resources Information Center

    Humphreys, Glyn W.; And Others

    1990-01-01

    A series of 6 experiments involving 210 subjects from a college subject pool examined orthographic priming effects between briefly presented pairs of letter strings. A theory of orthographic priming is presented, and the implications of the findings for understanding word recognition and reading are discussed. (SLD)

  8. Immediate effects of form-class constraints on spoken word recognition

    PubMed Central

    Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408

  9. Cortical Networks for Visual Self-Recognition

    NASA Astrophysics Data System (ADS)

    Sugiura, Motoaki

    This paper briefly reviews recent developments regarding the brain mechanisms of visual self-recognition. A special cognitive mechanism for visual self-recognition has been postulated based on behavioral and neuropsychological evidence, but its neural substrate remains controversial. Recent functional imaging studies suggest that multiple cortical mechanisms play self-specific roles during visual self-recognition, reconciling the existing controversy. Respective roles for the left occipitotemporal, right parietal, and frontal cortices in symbolic, visuospatial, and conceptual aspects of self-representation have been proposed.

  10. Signed reward prediction errors drive declarative learning

    PubMed Central

    Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493

  11. Signed reward prediction errors drive declarative learning.

    PubMed

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning-a quintessentially human form of learning-remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.

  12. Cross-modal working memory binding and word recognition skills: how specific is the link?

    PubMed

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  13. Recognition Decisions From Visual Working Memory Are Mediated by Continuous Latent Strengths.

    PubMed

    Ricker, Timothy J; Thiele, Jonathan E; Swagman, April R; Rouder, Jeffrey N

    2017-08-01

    Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the contents of visual working memory follow a continuous decision process of graded information about the correct choice or a discrete decision process reflecting only knowing and guessing. We find a clear pattern in favor of a continuous latent strength model of visual working memory-based decision making, supporting the notion that visual recognition decision processes are impacted by the degree of matching between the contents of working memory and the choices given. Relation to relevant findings and the implications for human information processing more generally are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  14. Visual habit formation in monkeys with neurotoxic lesions of the ventrocaudal neostriatum

    PubMed Central

    Fernandez-Ruiz, Juan; Wang, Jin; Aigner, Thomas G.; Mishkin, Mortimer

    2001-01-01

    Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation. PMID:11274442

  15. Long-Term Visuo-Gustatory Appetitive and Aversive Conditioning Potentiate Human Visual Evoked Potentials

    PubMed Central

    Christoffersen, Gert R. J.; Laugesen, Jakob L.; Møller, Per; Bredie, Wender L. P.; Schachtman, Todd R.; Liljendahl, Christina; Viemose, Ida

    2017-01-01

    Human recognition of foods and beverages is often based on visual cues associated with flavors. The dynamics of neurophysiological plasticity related to the acquisition of such long-term associations have only recently become the target of investigation. In the present work, the effects of appetitive and aversive visuo-gustatory conditioning were studied with high-density EEG recordings focusing on late components in the visual evoked potentials (VEPs), specifically the N2-P3 waves. Unfamiliar images were paired with either a pleasant or an unpleasant juice, and VEPs evoked by the images were compared before and 1 day after the pairings. In electrodes located over posterior visual cortex areas, the following changes were observed after conditioning: the amplitude from the N2 peak to the P3 peak increased and the N2 peak delay was reduced. The percentage increase of N2-to-P3 amplitudes was asymmetrically distributed over the posterior hemispheres despite the fact that the images were bilaterally symmetrical across the two visual hemifields. The percentage increase of N2-to-P3 amplitudes in each experimental subject correlated with the subject's evaluation of the positive or negative hedonic valence of the two juices. The results from 118 scalp electrodes gave surface maps of theta power distributions showing increased power over posterior visual areas after the pairings. Source current distributions calculated with swLORETA revealed that visual evoked currents rose as a result of conditioning in five cortical regions, from primary visual areas into the inferior temporal gyrus (ITG). These learning-induced changes were seen after both appetitive and aversive training, while a sham-trained control group showed no changes. It is concluded that long-term visuo-gustatory conditioning potentiated the N2-P3 complex, and it is suggested that the changes are regulated by the perceived hedonic valence of the US. PMID:28983243

  16. Visual memory and sustained attention impairment in youths with autism spectrum disorders.

    PubMed

    Chien, Y-L; Gau, S S-F; Shang, C-Y; Chiu, Y-N; Tsai, W-C; Wu, Y-Y

    2015-08-01

    An uneven neurocognitive profile is a hallmark of autism spectrum disorder (ASD). Studies focusing on visual memory performance in ASD have shown controversial results. We investigated visual memory and sustained attention in youths with ASD and typically developing (TD) youths. We recruited 143 pairs of youths with ASD (males 93.7%; mean age 13.1, s.d. 3.5 years) and age- and sex-matched TD youths. The ASD group consisted of 67 youths with autistic disorder (autism) and 76 with Asperger's disorder (AS) based on the DSM-IV criteria. They were assessed using the Cambridge Neuropsychological Test Automated Battery, including visual memory tasks [spatial recognition memory (SRM), delayed matching to sample (DMS), paired associates learning (PAL)] and a sustained attention task (rapid visual information processing; RVP). Youths with ASD performed significantly worse than TD youths on most of the tasks; the significance disappeared in the superior intelligence quotient (IQ) subgroup. The response latency on the tasks did not differ between the ASD and TD groups. Age had significant main effects on the SRM, DMS, RVP and parts of the PAL tasks, and interacted with diagnosis in DMS and RVP performance. There was no significant difference between autism and AS on visual tasks. Our findings imply that youths with ASD have a wide range of visual memory and sustained attention impairments, moderated by age and IQ, which supports temporal and frontal lobe dysfunction in ASD. The lack of difference between autism and AS implies that visual memory and sustained attention cannot distinguish these two ASD subtypes, which supports the DSM-5 ASD criteria.

  17. Aging and IQ effects on associative recognition and priming in item recognition

    PubMed Central

    McKoon, Gail; Ratcliff, Roger

    2012-01-01

    Two ways to examine memory for associative relationships between pairs of words were tested: an explicit method, associative recognition, and an implicit method, priming in item recognition. In an experiment with both kinds of tests, participants were asked to learn pairs of words. For the explicit test, participants were asked to decide whether two words of a test pair had been studied in the same or different pairs. For the implicit test, participants were asked to decide whether single words had or had not been among the studied pairs. Some test words were immediately preceded in the test list by the other word of the same pair and some by a word from a different pair. Diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008) analyses were carried out for both tasks for college-age participants, 60–74 year olds, and 75–90 year olds, and for higher- and lower-IQ participants, in order to compare the two measures of associative strength. Results showed parallel behavior of drift rates for associative recognition and priming across ages and across IQ, indicating that they are based, at least to some degree, on the same information in memory. PMID:24976676
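Drift-rate analyses of this kind rest on the diffusion model's core mechanism: noisy evidence accumulation toward response boundaries, with drift rate carrying the quality of the evidence (here, associative strength). A minimal simulation sketch follows; the parameters are illustrative, not those fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def diffusion_trial(drift, boundary=1.0, bias=0.5, dt=0.002):
    """One trial of a basic two-boundary diffusion process: evidence
    starts at bias*boundary and accumulates until it hits 0 or boundary."""
    x, t = bias * boundary, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= boundary, t      # (correct upper-boundary response, decision time)

def accuracy(drift, n_trials=400):
    return np.mean([diffusion_trial(drift)[0] for _ in range(n_trials)])

acc_strong = accuracy(2.0)   # strong memory-probe match -> high drift rate
acc_weak = accuracy(0.5)     # weak match -> low drift rate
print(acc_strong, acc_weak)
```

Higher drift rates yield both more accurate and faster responses, which is why parallel drift-rate patterns across tasks suggest a shared underlying memory signal.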

  18. Perceptual Effects of Social Salience: Evidence from Self-Prioritization Effects on Perceptual Matching

    ERIC Educational Resources Information Center

    Sui, Jie; He, Xun; Humphreys, Glyn W.

    2012-01-01

    We present novel evidence showing that new self-relevant visual associations can affect performance in simple shape recognition tasks. Participants associated labels for themselves, other people, or neutral terms with geometric shapes and then immediately judged whether subsequent label-shape pairings were matched. Across 4 experiments there was a…

  19. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The growing popularity of High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping for appropriate visualization on conventional, inexpensive LDR displays. Different tone mapping methods can produce markedly different visualizations, raising several privacy concerns. In fact, some visualization methods allow perceptual recognition of the individuals depicted, while others reveal no identity at all. Given that perceptual recognition may be possible, a natural question is how computer-based recognition will perform on tone-mapped images. In this paper, we present a study in which automatic face recognition based on sparse representation is tested on images produced by common tone mapping operators applied to HDR images, and its ability to recognize face identity is described. Furthermore, typical LDR images are used for training the face recognizer.
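Sparse-representation classification of the kind referenced here codes a probe face over a gallery of training images and assigns it to the class whose atoms best reconstruct it. The sketch below uses synthetic feature vectors in place of tone-mapped face images; all names and dimensions are illustrative, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over dictionary D."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Code the probe over the whole gallery, then assign it to the class
    whose own atoms reconstruct it with the smallest residual error."""
    x = omp(D, y, k)
    errors = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
              for c in np.unique(labels)}
    return min(errors, key=errors.get)

# Toy gallery: 3 identities x 5 images, each image a noisy copy of its
# identity's mean feature vector (stand-ins for tone-mapped face features).
dim, per_class = 40, 5
means = rng.normal(size=(3, dim))
cols, labels = [], []
for c in range(3):
    for _ in range(per_class):
        cols.append(means[c] + 0.1 * rng.normal(size=dim))
        labels.append(c)
D = np.column_stack(cols)
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms, as usual in SRC
labels = np.array(labels)

probe = means[1] + 0.1 * rng.normal(size=dim)
print(src_classify(D, labels, probe))
```

The intuition for the HDR setting is that a tone mapping operator that destroys identity information will flatten the class-wise reconstruction errors, degrading this classifier.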

  20. Differential Effects of the Factor Structure of the Wechsler Memory Scale-Revised on the Cortical Thickness and Complexity of Patients Aged Over 75 Years in a Memory Clinic Setting.

    PubMed

    Kinno, Ryuta; Shiromaru, Azusa; Mori, Yukiko; Futamura, Akinori; Kuroda, Takeshi; Yano, Satoshi; Murakami, Hidetomo; Ono, Kenjiro

    2017-01-01

    The Wechsler Memory Scale-Revised (WMS-R) is one of the internationally well-known batteries for memory assessment in a general memory clinic setting. Several factor structures of the WMS-R for patients aged under 74 have been proposed. However, little is known about the factor structure of the WMS-R for patients aged over 75 years and its neurological significance. Thus, we conducted exploratory factor analysis to determine the factor structure of the WMS-R for patients aged over 75 years in a memory clinic setting. Regional cerebral blood flow (rCBF) was calculated from single-photon emission computed tomography data. Cortical thickness and cortical fractal dimension, as a marker of cortical complexity, were calculated from high-resolution magnetic resonance imaging data. We found that a four-factor solution was the most appropriate for the model, with recognition memory, paired associate memory, visual-and-working memory, and attention as factors. Patients with mild cognitive impairments showed significantly higher factor scores for paired associate memory, visual-and-working memory, and attention than patients with Alzheimer's disease. Regarding the neuroimaging data, the factor scores for paired associate memory positively correlated with rCBF in the left pericallosal and hippocampal regions. Moreover, the factor score for paired associate memory showed the most robust correlations with cortical thickness in the limbic system, whereas the factor score for attention correlated with cortical thickness in the bilateral precuneus. Furthermore, each factor score correlated with the cortical fractal dimension in the bilateral frontotemporal regions. Interestingly, the factor scores for visual-and-working memory and attention selectively correlated with the cortical fractal dimension in the right posterior cingulate cortex and right precuneus cortex, respectively. These findings demonstrate that recognition memory, paired associate memory, visual-and-working memory, and attention can be crucial factors for interpreting the WMS-R results of patients aged over 75 years in a memory clinic setting, and that such results should accordingly be interpreted with caution.
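The exploratory-factor-analysis step can be illustrated on synthetic data. The sketch below generates hypothetical subtest scores from four latent factors (made-up loadings, not the patients' data) and recovers the factor count from the correlation matrix using the eigenvalue-greater-than-one (Kaiser) rule, one common criterion for choosing a solution:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical WMS-R-like data: 12 subtest scores for 300 patients,
# generated by 4 latent factors, with 3 subtests loading on each factor.
n_subjects, n_subtests, n_factors = 300, 12, 4
loadings = np.zeros((n_subtests, n_factors))
for f in range(n_factors):
    loadings[3 * f:3 * f + 3, f] = 0.8
latent = rng.normal(size=(n_subjects, n_factors))
scores = latent @ loadings.T + 0.5 * rng.normal(size=(n_subjects, n_subtests))

# Eigenvalues of the subtest correlation matrix; the Kaiser criterion
# retains factors whose eigenvalue exceeds 1.
R = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
n_retained = int((eigenvalues > 1.0).sum())
print(n_retained)
```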

  1. Robot Command Interface Using an Audio-Visual Speech Recognition System

    NASA Astrophysics Data System (ADS)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This article presents an automatic command recognition system using audio-visual information, intended to control the da Vinci laparoscopic surgical robot. The audio signal is parametrized using the Mel Frequency Cepstral Coefficients (MFCC) method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
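The MFCC parametrization mentioned above follows a standard pipeline: framing, windowing, power spectrum, mel filterbank, log compression, and a DCT. The following is a minimal textbook sketch of that pipeline, not the authors' implementation; all parameter values are generic defaults:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_filters=26, n_ceps=13):
    """Frame -> Hamming window -> power spectrum -> mel filterbank ->
    log -> DCT-II, keeping the first n_ceps coefficients."""
    frames = [signal[i:i + n_fft] * np.hamming(n_fft)
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft
    logmel = np.log(mel_filterbank(n_filters, n_fft, sr) @ power.T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return (dct @ logmel).T          # shape: (n_frames, n_ceps)

# 0.5 s of a 440 Hz tone as a stand-in for a recorded voice command
t = np.arange(8000) / 16000
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)
```

Per-frame MFCC vectors like these (typically with delta features appended) are what an HMM- or DNN-based command recognizer consumes.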

  2. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why? The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models of non-partner faces. We conclude that to identify individuals this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  4. Word segmentation in phonemically identical and prosodically different sequences using cochlear implants: A case study.

    PubMed

    Basirat, Anahita

    2017-01-01

    Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.

  5. Gender in facial representations: a contrast-based study of adaptation within and between the sexes.

    PubMed

    Oruç, Ipek; Guo, Xiaoyue M; Barton, Jason J S

    2011-01-18

    Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than for those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space.

  6. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  7. Implicit recognition based on lateralized perceptual fluency.

    PubMed

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  8. Implicit phonological priming during visual word recognition.

    PubMed

    Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C

    2011-03-15

    Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Getting the Gist of Events: Recognition of Two-Participant Actions from Brief Displays

    PubMed Central

    Hafri, Alon; Papafragou, Anna; Trueswell, John C.

    2013-01-01

    Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In three experiments, we displayed naturalistic photographs of a wide range of two-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist, e.g., ‘kicking’, ‘pushing’, etc.) and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in two sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with two-second durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition but, with sufficient input from multiple fixations, people categorically determine the relationship between event participants. PMID:22984951

  10. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    PubMed

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias to detecting social visual cues of the defector face. © 2012 The British Psychological Society.

  11. Simulation of talking faces in the human brain improves auditory speech recognition

    PubMed Central

    von Kriegstein, Katharina; Dogan, Özgür; Grüter, Martina; Giraud, Anne-Lise; Kell, Christian A.; Grüter, Thomas; Kleinschmidt, Andreas; Kiebel, Stefan J.

    2008-01-01

    Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face. PMID:18436648

  12. The first does the work, but the third time's the charm: the effects of massed repetition on episodic encoding of multimodal face-name associations.

    PubMed

    Mangels, Jennifer A; Manzi, Alberto; Summerfield, Christopher

    2010-03-01

    In social interactions, it is often necessary to rapidly encode the association between visually presented faces and auditorily presented names. The present study used event-related potentials to examine the neural correlates of associative encoding for multimodal face-name pairs. We assessed study-phase processes leading to high-confidence recognition of correct pairs (and consistent rejection of recombined foils) as compared to lower-confidence recognition of correct pairs (with inconsistent rejection of recombined foils) and recognition failures (misses). Both high- and low-confidence retrieval of face-name pairs were associated with study-phase activity suggestive of item-specific processing of the face (posterior inferior temporal negativity) and name (fronto-central negativity). However, only those pairs later retrieved with high confidence recruited a sustained centro-parietal positivity that an ancillary localizer task suggested may index an association-unique process. Additionally, we examined how these processes were influenced by massed repetition, a mnemonic strategy commonly employed in everyday situations to improve face-name memory. Differences in subsequent memory effects across repetitions suggested that associative encoding was strongest at the initial presentation, and thus, that the initial presentation has the greatest impact on memory formation. Yet, exploratory analyses suggested that the third presentation may have benefited later memory by providing an opportunity for extended processing of the name. Thus, although encoding of the initial presentation was critical for establishing a strong association, the extent to which processing was sustained across subsequent immediate (massed) presentations may provide additional encoding support that serves to differentiate face-name pairs from similar (recombined) pairs by providing additional encoding opportunities for the less dominant stimulus dimension (i.e., name).

  13. An ERP investigation of visual word recognition in syllabary scripts.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2013-06-01

    The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  15. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    Emotional expressions are sometimes mixed with utterance-related articulation in spontaneous face-to-face communication, which makes emotion recognition difficult. This article introduces methods for reducing the influence of the utterance on the visual parameters used in audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then performed by computing the residual between the observed visual parameters and the utterance-related visual parameters predicted from the audio. To this end, the article introduces a Fused Hidden Markov Model inversion method, trained on a neutral-expression audio-visual corpus. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, boosting), the utterance-reduction method gives better emotion recognition results. The experiments also show the effectiveness of the emotion recognition system when used in a live environment.
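
The core idea of this record, subtracting the speech-predicted part of facial motion so that the residual carries the emotion signal, can be sketched in a few lines. This is a minimal illustration only: the paper uses a Fused HMM inversion simplified to a GMM mapping, whereas a plain least-squares mapping from one hypothetical audio feature to one visual parameter stands in for it here, and all data values are invented.

```python
# Hypothetical sketch of residual-based utterance reduction. A mapping
# from an audio feature (say, loudness) to a visual parameter (say, lip
# opening) is fitted on neutral-expression speech; at test time the
# residual between observed and predicted motion is the candidate
# emotion-related component. Linear regression stands in for the
# paper's GMM mapping.

def fit_linear_map(audio, visual):
    """Ordinary least squares for visual ~ slope * audio + intercept."""
    n = len(audio)
    mean_a = sum(audio) / n
    mean_v = sum(visual) / n
    cov = sum((a - mean_a) * (v - mean_v) for a, v in zip(audio, visual))
    var = sum((a - mean_a) ** 2 for a in audio)
    slope = cov / var
    intercept = mean_v - slope * mean_a
    return slope, intercept

def utterance_residual(audio, visual, slope, intercept):
    """Subtract the utterance-predicted motion from the observed motion."""
    return [v - (slope * a + intercept) for a, v in zip(audio, visual)]

# Neutral-speech training data: visual motion fully explained by audio.
audio_train = [0.0, 1.0, 2.0, 3.0]
visual_train = [0.5, 1.5, 2.5, 3.5]
slope, intercept = fit_linear_map(audio_train, visual_train)

# At test time, extra lip movement not predicted by the speech signal
# remains in the residual and can be fed to the emotion classifier.
audio_test = [1.0, 2.0]
visual_test = [1.9, 2.5]
print(utterance_residual(audio_test, visual_test, slope, intercept))
```

In the full system the residual parameters, rather than the raw visual parameters, would be passed on to the MHMM-based emotion classifier.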

  16. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  17. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    PubMed

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual recognition. Copyright © 2010 Elsevier Srl. All rights reserved.

  18. Superordinate Level Processing Has Priority Over Basic-Level Processing in Scene Gist Recognition

    PubMed Central

    Sun, Qi; Zheng, Yang; Sun, Mingxia; Zheng, Yuanjie

    2016-01-01

    By combining a perceptual discrimination task and a visuospatial working memory task, the present study examined the effects of visuospatial working memory load on the hierarchical processing of scene gist. In the perceptual discrimination task, two scene images from the same (manmade–manmade pairing or natural–natural pairing) or different superordinate level categories (manmade–natural pairing) were presented simultaneously, and participants were asked to judge whether the two images belonged to the same basic-level category (e.g., street–street pairing) or not (e.g., street–highway pairing). In the concurrent working memory task, spatial load (position-based load in Experiment 1) and object load (figure-based load in Experiment 2) were manipulated. The results were as follows: (a) spatial load and object load have stronger effects on the discrimination of same basic-level scene pairings than of same superordinate-level scene pairings; (b) spatial load has a larger impact on the discrimination of scene pairings at early stages than at later stages, whereas object information has a larger influence at later stages than at early stages. It follows that superordinate level processing has priority over basic-level processing in scene gist recognition, and that spatial information contributes to the earlier and object information to the later stages of scene gist recognition. PMID:28382195

  19. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  20. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.

  1. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique, along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system, is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted individually from the visual and thermal images. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are then projected into higher-dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variation, partial occlusion, expression variation, and temperature variation that affect visual and thermal face recognition techniques. The AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure greatly improves recognition accuracy for both the visual and thermal images compared to conventional techniques. A decision-level fusion methodology is also presented which, together with the feature selection procedure, outperforms various other face recognition techniques in terms of recognition accuracy.
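
Decision-level fusion of the kind this record describes can be illustrated generically: each modality scores every enrolled identity, and the final decision is taken on a combined score. The sketch below is an assumption-laden toy, not the paper's actual fusion rule; the weights, identities, and score values are all invented for illustration.

```python
# Hypothetical sketch of decision-level fusion for a two-sensor face
# recognizer: the visual and thermal matchers each emit a per-identity
# match score, and the fused score is a confidence-weighted sum. The
# weights 0.6/0.4 are illustrative defaults, not values from the paper.

def fuse_scores(visual_scores, thermal_scores, w_visual=0.6, w_thermal=0.4):
    """Combine per-identity match scores from the two modalities."""
    assert visual_scores.keys() == thermal_scores.keys()
    return {
        identity: w_visual * visual_scores[identity]
                  + w_thermal * thermal_scores[identity]
        for identity in visual_scores
    }

def decide(fused):
    """Accept the identity with the highest fused score."""
    return max(fused, key=fused.get)

visual = {"alice": 0.80, "bob": 0.55}    # visual matcher favors alice
thermal = {"alice": 0.40, "bob": 0.90}   # thermal matcher favors bob
fused = fuse_scores(visual, thermal)
print(decide(fused))
```

The benefit claimed in the abstract comes from exactly this complementarity: when illumination degrades the visual score, a reliable thermal score can still carry the decision.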

  2. Motor-visual neurons and action recognition in social interactions.

    PubMed

    de la Rosa, Stephan; Bülthoff, Heinrich H

    2014-04-01

    Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem particularly relevant for the visual recognition of social information in social interactions, namely context-specific and contingency-based learning.

  3. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  4. Parietal lobe critically supports successful paired immediate and single-item delayed memory for targets.

    PubMed

    Krumm, Sabine; Kivisaari, Sasa L; Monsch, Andreas U; Reinhardt, Julia; Ulmer, Stephan; Stippich, Christoph; Kressig, Reto W; Taylor, Kirsten I

    2017-05-01

    The parietal lobe is important for successful recognition memory, but its role is not yet fully understood. We investigated the parietal lobes' contribution to immediate paired-associate memory and delayed item-recognition memory separately for hits (targets) and correct rejections (distractors). We compared the behavioral performance of 56 patients with known parietal and medial temporal lobe dysfunction (i.e. early Alzheimer's Disease) to 56 healthy control participants in an immediate paired and delayed single item object memory task. Additionally, we performed voxel-based morphometry analyses to investigate the functional-neuroanatomic relationships between performance and voxel-based estimates of atrophy in whole-brain analyses. Behaviorally, all participants performed better identifying targets than rejecting distractors. The voxel-based morphometry analyses associated atrophy in the right ventral parietal cortex with fewer correct responses to familiar items (i.e. hits) in the immediate and delayed conditions. Additionally, medial temporal lobe integrity correlated with better performance in rejecting distractors, but not in identifying targets, in the immediate paired-associate task. Our findings suggest that the parietal lobe critically supports successful immediate and delayed target recognition memory, and that the ventral aspect of the parietal cortex and the medial temporal lobe may have complementary preferences for identifying targets and rejecting distractors, respectively, during recognition memory. Copyright © 2017. Published by Elsevier Inc.

  5. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.

  6. Visual agnosia and focal brain injury.

    PubMed

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  7. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2011-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during…

  8. Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes

    PubMed Central

    Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.

    2011-01-01

    Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were equally dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100ms. Moreover, recognition was more suppressed for test faces of a different-gender than those of the same-gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414

  9. Semantic relations differentially impact associative recognition memory: electrophysiological evidence.

    PubMed

    Kriukova, Olga; Bridger, Emma; Mecklinger, Axel

    2013-10-01

    Though associative recognition memory is thought to rely primarily on recollection, recent research indicates that familiarity might also make a substantial contribution when to-be-learned items are integrated into a coherent structure by means of an existing semantic relation. It remains unclear, however, how different types of semantic relations, such as categorical (e.g., dancer-singer) and thematic (e.g., dancer-stage) relations, might affect associative recognition. Using event-related potentials (ERPs), we addressed this question by manipulating the type of semantic link between paired words in an associative recognition memory experiment. An early midfrontal old/new effect, typically linked to familiarity, was observed across the relation types. In contrast, a robust left parietal old/new effect was found in the categorical condition only, suggesting a clear contribution of recollection to associative recognition for this kind of pair. One interpretation of this pattern is that familiarity was sufficiently diagnostic for associative recognition of thematic relations, which could result from the integrative nature of thematic relatedness compared to the similarity-based nature of categorical pairs. The present study suggests that the extent to which recollection and familiarity are involved in associative recognition is at least in part determined by the properties of the semantic relations between the paired associates. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. A process-based approach to characterizing the effect of acute alprazolam challenge on visual paired associate learning and memory in healthy older adults.

    PubMed

    Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul

    2012-11-01

    Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Hazardous sign detection for safety applications in traffic monitoring

    NASA Astrophysics Data System (ADS)

    Benesova, Wanda; Kottman, Michal; Sidla, Oliver

    2012-01-01

    The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One solution to this problem is the automatic detection and registration of vehicles that are marked with dangerous goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different detection methods: the bag-of-visual-words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate; different codebook sizes have been evaluated for this detection task. The best result of the first method, BoW, was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
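
The BoW front end that this record compares against can be sketched compactly: local descriptors are quantized to their nearest codebook entry and the image becomes a histogram of visual-word counts. The descriptors and codebook below are toy 2-D vectors invented for illustration; a real system would use high-dimensional local descriptors (e.g., SIFT) and a k-means-learned codebook.

```python
# Minimal sketch of the bag-of-visual-words representation: quantize
# each local descriptor to its nearest codebook word, then count word
# occurrences per image. Toy 2-D descriptors stand in for real ones.

def nearest_word(descriptor, codebook):
    """Index of the codebook entry closest to the descriptor."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(descriptor, codebook[i]))

def bow_histogram(descriptors, codebook):
    """Histogram of visual-word counts for one image."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # 3 visual words
descriptors = [(0.1, 0.1), (0.9, 1.1), (0.2, 0.0), (0.1, 0.9)]
print(bow_histogram(descriptors, codebook))        # word counts
```

The paper's stronger method keeps, in addition, which words co-occur as pairs and their relative positions, letting each pair cast a Hough vote for a sign center; the quantization step above is the shared front end of both approaches.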

  12. One-Reason Decision Making Unveiled: A Measurement Model of the Recognition Heuristic

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.; Erdfelder, Edgar; Pohl, Rudiger F.

    2010-01-01

    The fast-and-frugal recognition heuristic (RH) theory provides a precise process description of comparative judgments. It claims that, in suitable domains, judgments between pairs of objects are based on recognition alone, whereas further knowledge is ignored. However, due to the confound between recognition and further knowledge, previous…

  13. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  14. Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.

    PubMed

    Brown, Charity; Lloyd-Jones, Toby J

    2006-03-01

    We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description condition than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.

  15. Self-replication of chemical systems based on recognition within a double or a triple helix - A realistic hypothesis

    NASA Technical Reports Server (NTRS)

    Kanavarioti, Anastassia

    1992-01-01

    A scenario is proposed for the non-enzymatic self-replication of short RNA molecules. The self-replication of an oligopyrimidine strand is considered and the process of template-directed synthesis based on recognition within a double helix is discussed. Replication mechanisms are suggested for selected oligonucleotides. The mechanisms are based on Watson-Crick base pairing between complementary nucleotides as well as Hoogsteen base pairing between a duplex and the complementary third strand. It is suggested that self-replication based on these mechanisms may be accomplished but may result in a substantial amount of misinformation transfer when mixed oligonucleotides are used.

  16. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused mainly on exploring visual cues to handle relatively small-granularity events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way to approach video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granularity events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. To compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents; this knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  18. Recognition Decisions from Visual Working Memory Are Mediated by Continuous Latent Strengths

    ERIC Educational Resources Information Center

    Ricker, Timothy J.; Thiele, Jonathan E.; Swagman, April R.; Rouder, Jeffrey N.

    2017-01-01

    Making recognition decisions often requires us to reference the contents of working memory, the information available for ongoing cognitive processing. As such, understanding how recognition decisions are made when based on the contents of working memory is of critical importance. In this work we examine whether recognition decisions based on the…

  19. Orthographic similarity: the case of "reversed anagrams".

    PubMed

    Morris, Alison L; Still, Mary L

    2012-07-01

    How orthographically similar are words such as paws and swap, flow and wolf, or live and evil? According to the letter position coding schemes used in models of visual word recognition, these reversed anagrams are considered to be less similar than words that share letters in the same absolute or relative positions (such as home and hose or plan and lane). Therefore, reversed anagrams should not produce the standard orthographic similarity effects found using substitution neighbors (e.g., home, hose). Simulations using the spatial coding model (Davis, Psychological Review 117, 713-758, 2010), for example, predict an inhibitory masked-priming effect for substitution neighbor word pairs but a null effect for reversed anagrams. Nevertheless, we obtained significant inhibitory priming using both stimulus types (Experiment 1). We also demonstrated that robust repetition blindness can be obtained for reversed anagrams (Experiment 2). Reversed anagrams therefore provide a new test for models of visual word recognition and orthographic similarity.
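The contrast described above can be illustrated with a toy absolute-position overlap measure (a deliberately simplified sketch for illustration only, not the spatial coding model or any scheme from the paper):

```python
def position_overlap(w1, w2):
    """Fraction of letters shared in the same absolute position."""
    return sum(a == b for a, b in zip(w1, w2)) / max(len(w1), len(w2))

# Substitution neighbors share most absolute letter positions:
print(position_overlap("home", "hose"))   # 0.75
# Reversed anagrams share none, despite containing exactly the
# same letters, so position-based schemes rate them dissimilar:
print(position_overlap("paws", "swap"))   # 0.0
```

Under any such position-based metric the reversed anagrams score near zero, which is why the inhibitory priming the authors report is informative for models of orthographic similarity.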

  20. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    PubMed Central

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2010-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833

  1. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

In this paper, we propose an approach based on artificial neural networks for recognizing different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of experimental multichannel EEG data, we optimize the spatiotemporal representation of the multichannel EEG to achieve close to 97% accuracy in recognizing EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for each interpretation. Since these features are common to all subjects, a single artificial neural network can classify the associated brain states of other subjects with high accuracy.

  2. 1,8-Naphthyridine-2,7-diamine: a potential universal reader of Watson-Crick base pairs for DNA sequencing by electron tunneling.

    PubMed

    Liang, Feng; Lindsay, Stuart; Zhang, Peiming

    2012-11-21

    With the aid of Density Functional Theory (DFT), we designed 1,8-naphthyridine-2,7-diamine as a recognition molecule to read DNA base pairs for genomic sequencing by electron tunneling. NMR studies show that it can form stable triplets with both A : T and G : C base pairs through hydrogen bonding. Our results suggest that the naphthyridine molecule should be able to function as a universal base pair reader in a tunneling gap, generating distinguishable signatures under electrical bias for each of DNA base pairs.

  3. 1,8-Naphthyridine-2,7-diamine: A Potential Universal Reader of the Watson-Crick Base Pairs for DNA Sequencing by Electron Tunneling

    PubMed Central

    Liang, Feng; Lindsay, Stuart; Zhang, Peiming

    2013-01-01

    With the aid of Density Functional Theory (DFT), we designed 1,8-naphthyridine-2,7-diamine as a recognition molecule to read the DNA base pairs for genomic sequencing by electron tunneling. NMR studies show that it can form stable triplets with both A:T and G:C base pairs through hydrogen bonding. Our results suggest that the naphthyridine molecule should be able to function as a universal base pair reader in a tunneling gap, generating distinguishable signatures under electrical bias for each of DNA base pairs. PMID:23038027

  4. [Visual Texture Agnosia in Humans].

    PubMed

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  5. Hippocampal Functioning and Verbal Associative Memory in Adolescents with Congenital Hypothyroidism

    PubMed Central

    Wheeler, Sarah M.; McLelland, Victoria C.; Sheard, Erin; McAndrews, Mary Pat; Rovet, Joanne F.

    2015-01-01

    Thyroid hormone (TH) is essential for normal development of the hippocampus, which is critical for memory and particularly for learning and recalling associations between visual and verbal stimuli. Adolescents with congenital hypothyroidism (CH), who lack TH in late gestation and early life, demonstrate weak verbal recall abilities, reduced hippocampal volumes, and abnormal hippocampal functioning for visually associated material. However, it is not known if their hippocampus functions abnormally when remembering verbal associations. Our objective was to assess hippocampal functioning in CH using functional magnetic resonance imaging (fMRI). Fourteen adolescents with CH and 14 typically developing controls (TDC) were studied. Participants studied pairs of words and then, during fMRI acquisition, made two types of recognition decisions: in one they judged whether the pairs were the same as when seen originally and in the other, whether individual words were seen before regardless of pairing. Hippocampal activation was greater for pairs than items in both groups, but this difference was only significant in TDC. When we directly compared the groups, the right anterior hippocampus was the primary region in which the TDC and CH groups differed for this pair memory effect. Results signify that adolescents with CH show abnormal hippocampal functioning during verbal memory processing. PMID:26539162

  6. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing the object discrimination performance of an acquired prosopagnosia patient (LB) with that of healthy control participants when distinguishing varied stimuli. LB's prosopagnosia left her heavily reliant on structural descriptions, or categorical object differences, in visual discrimination tasks, whereas control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli across eight separate experiments and presenting all 36 geons, we found that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  7. Introducing memory and association mechanism into a biologically inspired visual model.

    PubMed

    Qiao, Hong; Li, Yinlin; Tang, Tang; Wang, Peng

    2014-09-01

A well-known biologically inspired hierarchical model (the HMAX model), which corresponds to areas V1 to V4 of the ventral pathway in primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model achieves position- and scale-tolerant recognition, a central problem in pattern recognition. In this paper, based on additional biological experimental evidence, we introduce a memory and association mechanism into the HMAX model. The main contributions of this work are: 1) mimicking the active memory and association mechanism and adding top-down adjustment to the HMAX model, the first attempt to add active adjustment to this model; and 2) from an information perspective, algorithms based on the new model reduce computation and storage while retaining good recognition performance. The new model is also applied to object recognition processes. Preliminary experimental results show that our method is efficient with a much lower memory requirement.
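The position tolerance mentioned above comes from HMAX's "C" layers, which take a local maximum over spatial neighborhoods of the preceding feature maps. A minimal sketch of that standard pooling operation (not the authors' memory/association extension, and with an illustrative 2x2 pool size) might look like:

```python
def c_layer_max_pool(feature_map, pool=2):
    """HMAX-style C-layer: local max over spatial neighborhoods.
    Taking the max makes the response tolerant to small shifts of
    the stimulus within each pooling window."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h, pool):
        row = []
        for j in range(0, w, pool):
            # Max over the pool x pool window, clipped at the borders.
            row.append(max(feature_map[x][y]
                           for x in range(i, min(i + pool, h))
                           for y in range(j, min(j + pool, w))))
        out.append(row)
    return out

# Hypothetical 4x4 S-layer response map:
fm = [[1, 3, 0, 2],
      [4, 2, 1, 0],
      [0, 1, 5, 6],
      [2, 0, 7, 1]]
print(c_layer_max_pool(fm))  # [[4, 2], [2, 7]]
```

Pooling over scales works the same way, with the max taken across feature maps computed at different filter sizes instead of across spatial positions.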

  8. Novelty preference in patients with developmental amnesia.

    PubMed

    Munoz, M; Chadwick, M; Perez-Hernandez, E; Vargha-Khadem, F; Mishkin, M

    2011-12-01

    To re-examine whether or not selective hippocampal damage reduces novelty preference in visual paired comparison (VPC), we presented two different versions of the task to a group of patients with developmental amnesia (DA), each of whom sustained this form of pathology early in life. Compared with normal control participants, the DA group showed a delay-dependent reduction in novelty preference on one version of the task and an overall reduction on both versions combined. Because VPC is widely considered to be a measure of incidental recognition, the results appear to support the view that the hippocampus contributes to recognition memory. A difficulty for this conclusion, however, is that according to one current view the hippocampal contribution to recognition is limited to task conditions that encourage recollection of an item in some associated context, and according to another current view, to recognition of an item with the high confidence judgment that reflects a strong memory. By contrast, VPC, throughout which the participant remains entirely uninstructed other than to view the stimuli, would seem to lack such task conditions and so would likely lead to recognition based on familiarity rather than recollection or, alternatively, weak memories rather than strong. However, before concluding that the VPC impairment therefore contradicts both current views regarding the role of the hippocampus in recognition memory, two possibilities that would resolve this issue need to be investigated. One is that some variable in VPC, such as the extended period of stimulus encoding during familiarization, overrides its incidental nature, and, because this condition promotes either recollection- or strength-based recognition, renders the task hippocampal-dependent. 
The other possibility is that VPC, rather than providing a measure of incidental recognition, actually assesses an implicit, information-gathering process modulated by habituation, for which the hippocampus is also partly responsible, independent of its role in recognition. Copyright © 2010 Wiley Periodicals, Inc.

  9. Myotonic Dystrophy Type 1 RNA Crystal Structures Reveal Heterogeneous 1×1 Nucleotide UU Internal Loop Conformations⊥

    PubMed Central

    Kumar, Amit; Park, HaJeung; Fang, Pengfei; Parkesh, Raman; Guo, Min; Nettles, Kendall W.; Disney, Matthew D.

    2011-01-01

RNA internal loops often display a variety of conformations in solution. Herein, we visualize conformational heterogeneity in the context of the 5′CUG/3′GUC repeat motif present in the RNA that causes myotonic dystrophy type 1 (DM1). Specifically, two crystal structures of a model DM1 triplet repeating construct, 5′r(UUGGGC(CUG)3GUCC)2, refined to 2.20 Å and 1.52 Å resolution, are disclosed. Here, differences in the orientation of the 5′ dangling UU end between the two structures induce changes in the backbone groove width, which reveals that non-canonical 1×1 nucleotide UU internal loops can display an ensemble of pairing conformations. In the 2.20 Å structure, CUGa, the 5′UU forms a one-hydrogen-bonded pair with a 5′UU of a neighboring helix in the unit cell to form a pseudo-infinite helix. The central 1×1 nucleotide UU internal loop has no hydrogen bonds, while the terminal 1×1 nucleotide UU internal loops each form a one-hydrogen-bonded pair. In the 1.52 Å structure, CUGb, the 5′UU dangling end is tucked into the major groove of the duplex. While the canonically paired bases show no change in base pairing, in CUGb the terminal 1×1 nucleotide UU internal loops now form two hydrogen-bonded pairs. Thus, the shift in the major groove induced by the 5′UU dangling end alters non-canonical base patterns. Collectively, these structures indicate that 1×1 nucleotide UU internal loops in DM1 may sample multiple conformations in vivo. This observation has implications for the recognition of this RNA, and other repeating transcripts, by protein and small molecule ligands. PMID:21988728

  10. Myotonic dystrophy type 1 RNA crystal structures reveal heterogeneous 1 × 1 nucleotide UU internal loop conformations.

    PubMed

    Kumar, Amit; Park, HaJeung; Fang, Pengfei; Parkesh, Raman; Guo, Min; Nettles, Kendall W; Disney, Matthew D

    2011-11-15

    RNA internal loops often display a variety of conformations in solution. Herein, we visualize conformational heterogeneity in the context of the 5'CUG/3'GUC repeat motif present in the RNA that causes myotonic dystrophy type 1 (DM1). Specifically, two crystal structures of a model DM1 triplet repeating construct, 5'r[UUGGGC(CUG)(3)GUCC](2), refined to 2.20 and 1.52 Å resolution are disclosed. Here, differences in the orientation of the 5' dangling UU end between the two structures induce changes in the backbone groove width, which reveals that noncanonical 1 × 1 nucleotide UU internal loops can display an ensemble of pairing conformations. In the 2.20 Å structure, CUGa, the 5' UU forms a one hydrogen-bonded pair with a 5' UU of a neighboring helix in the unit cell to form a pseudoinfinite helix. The central 1 × 1 nucleotide UU internal loop has no hydrogen bonds, while the terminal 1 × 1 nucleotide UU internal loops each form a one-hydrogen bond pair. In the 1.52 Å structure, CUGb, the 5' UU dangling end is tucked into the major groove of the duplex. While the canonically paired bases show no change in base pairing, in CUGb the terminal 1 × 1 nucleotide UU internal loops now form two hydrogen-bonded pairs. Thus, the shift in the major groove induced by the 5' UU dangling end alters noncanonical base patterns. Collectively, these structures indicate that 1 × 1 nucleotide UU internal loops in DM1 may sample multiple conformations in vivo. This observation has implications for the recognition of this RNA, and other repeating transcripts, by protein and small molecule ligands.

  11. Myotonic Dystrophy Type 1 RNA Crystal Structures Reveal Heterogeneous 1 × 1 Nucleotide UU Internal Loop Conformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Amit; Park, HaJeung; Fang, Pengfei

    2012-03-27

RNA internal loops often display a variety of conformations in solution. Herein, we visualize conformational heterogeneity in the context of the 5'CUG/3'GUC repeat motif present in the RNA that causes myotonic dystrophy type 1 (DM1). Specifically, two crystal structures of a model DM1 triplet repeating construct, 5'r[UUGGGC(CUG)(3)GUCC](2), refined to 2.20 and 1.52 Å resolution, are disclosed. Here, differences in the orientation of the 5' dangling UU end between the two structures induce changes in the backbone groove width, which reveals that noncanonical 1 × 1 nucleotide UU internal loops can display an ensemble of pairing conformations. In the 2.20 Å structure, CUGa, the 5' UU forms a one hydrogen-bonded pair with a 5' UU of a neighboring helix in the unit cell to form a pseudoinfinite helix. The central 1 × 1 nucleotide UU internal loop has no hydrogen bonds, while the terminal 1 × 1 nucleotide UU internal loops each form a one-hydrogen bond pair. In the 1.52 Å structure, CUGb, the 5' UU dangling end is tucked into the major groove of the duplex. While the canonically paired bases show no change in base pairing, in CUGb the terminal 1 × 1 nucleotide UU internal loops now form two hydrogen-bonded pairs. Thus, the shift in the major groove induced by the 5' UU dangling end alters noncanonical base patterns. Collectively, these structures indicate that 1 × 1 nucleotide UU internal loops in DM1 may sample multiple conformations in vivo. This observation has implications for the recognition of this RNA, and other repeating transcripts, by protein and small molecule ligands.

  12. Examination of soldier target recognition with direct view optics

    NASA Astrophysics Data System (ADS)

    Long, Frederick H.; Larkin, Gabriella; Bisordi, Danielle; Dorsey, Shauna; Marianucci, Damien; Goss, Lashawnta; Bastawros, Michael; Misiuda, Paul; Rodgers, Glenn; Mazz, John P.

    2017-10-01

Target recognition and identification is a problem of great military and scientific importance. To examine the correlation between target recognition and optical magnification, ten U.S. Army soldiers were tasked with identifying letters on targets 800 and 1300 meters away. Letters were used because they are a standard method for measuring visual acuity. The letters were approximately 90 cm high, which is the size of a well-known rifle. Four direct view optics with angular magnifications of 1.5x, 4x, 6x, and 9x were used. The goal of this approach was to measure actual probabilities of correct target identification. Previous scientific literature suggests that target recognition can be modeled as a linear response problem in angular frequency space, using the established values for the contrast sensitivity function of a healthy human eye and the experimentally measured modulation transfer function of the optic. At 9x magnification, the soldiers could identify the letters with almost no errors (i.e., 97% probability of correct identification). At lower magnifications, errors in letter identification were more frequent. The identification errors were not random but occurred most frequently with a few pairs of letters (e.g., O and Q), which is consistent with the literature on letter recognition. In addition, within the small sample of ten soldiers, there was considerable variation in recognition capability at 1.5x and a range of 800 meters. This can be directly attributed to variation in the observers' visual acuity.
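The role of magnification here follows from small-angle geometry: a target of height s at range d subtends about s/d radians, and an optic of magnification M multiplies that apparent angle by M. A sketch of the arithmetic for the conditions above (the ~5 arcmin legibility figure is the conventional Snellen letter size, used here only as an illustrative reference point, not a value from the study):

```python
import math

def apparent_arcmin(size_m, range_m, magnification=1.0):
    """Apparent angular size of a target through an optic, in arcminutes,
    using the small-angle approximation angle ~= size / range."""
    rad = size_m / range_m
    return magnification * math.degrees(rad) * 60.0

# A 0.9 m letter at 800 m subtends ~3.9 arcmin unaided, near the
# ~5 arcmin size of a just-legible Snellen letter:
print(round(apparent_arcmin(0.9, 800), 1))        # 3.9
# At 9x it subtends ~34.8 arcmin, consistent with near-perfect reading:
print(round(apparent_arcmin(0.9, 800, 9.0), 1))   # 34.8
# At 1300 m with the 1.5x optic it is only ~3.6 arcmin, where individual
# acuity differences plausibly dominate:
print(round(apparent_arcmin(0.9, 1300, 1.5), 1))  # 3.6
```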

  13. An Evaluation of PC-Based Optical Character Recognition Systems.

    ERIC Educational Resources Information Center

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for use by blind and visually impaired people. Considered are OCR components and terms, documentation, scanning and reading, command structure, conversion, unique features, accuracy of recognition, scanning time, speed, and cost. (DB)

  14. Four base recognition by triplex-forming oligonucleotides at physiological pH

    PubMed Central

    Rusling, David A.; Powers, Vicki E. C.; Ranasinghe, Rohan T.; Wang, Yang; Osborne, Sadie D.; Brown, Tom; Fox, Keith R.

    2005-01-01

    We have achieved recognition of all 4 bp by triple helix formation at physiological pH, using triplex-forming oligonucleotides that contain four different synthetic nucleotides. BAU [2′-aminoethoxy-5-(3-aminoprop-1-ynyl)uridine] recognizes AT base pairs with high affinity, MeP (3-methyl-2 aminopyridine) binds to GC at higher pHs than cytosine, while APP (6-(3-aminopropyl)-7-methyl-3H-pyrrolo[2,3-d]pyrimidin-2(7H)-one) and S [N-(4-(3-acetamidophenyl)thiazol-2-yl-acetamide)] bind to CG and TA base pairs, respectively. Fluorescence melting and DNase I footprinting demonstrate successful triplex formation at a 19mer oligopurine sequence that contains two CG and two TA interruptions. The complexes are pH dependent, but are still stable at pH 7.0. BAU, MeP and APP retain considerable selectivity, and single base pair changes opposite these residues cause a large reduction in affinity. In contrast, S is less selective and tolerates CG pairs as well as TA. PMID:15911633

  15. Recognition-induced forgetting is not due to category-based set size.

    PubMed

    Maxcey, Ashleigh M

    2016-01-01

    What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set-size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. 
Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.

  16. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  17. State Recognition and Visualization of Hoisting Motor of Quayside Container Crane Based on SOFM

    NASA Astrophysics Data System (ADS)

    Yang, Z. Q.; He, P.; Tang, G.; Hu, X.

    2017-07-01

The neural network structure and algorithm of the self-organizing feature map (SOFM) are investigated and analysed, and the method is applied to state recognition and visualization for the quayside container crane hoisting motor. Using SOFM, the data are clustered and visualized after attribute reduction, three motor states are distinguished using the Root Mean Square (RMS), Impulse Index, and Margin Index features, and the simulation visualization interface is implemented in MATLAB. Processing of the sample data shows that the method accurately identifies the motor state, providing better monitoring of the quayside container crane hoisting motor and a new approach to mechanical state recognition.
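The three condition indicators named above have standard definitions in machinery vibration diagnostics; a sketch using those common forms (the paper may define them slightly differently, and the signal below is a made-up example with one fault-like spike):

```python
import math

def rms(x):
    """Root Mean Square: overall vibration energy."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def impulse_index(x):
    """Peak amplitude over mean absolute amplitude; sensitive to
    isolated impacts such as bearing defects."""
    a = [abs(v) for v in x]
    return max(a) / (sum(a) / len(a))

def margin_index(x):
    """Peak amplitude over squared mean of root amplitudes; an even
    more spike-sensitive (dimensionless) indicator."""
    a = [abs(v) for v in x]
    root_mean = sum(math.sqrt(v) for v in a) / len(a)
    return max(a) / (root_mean ** 2)

# Hypothetical vibration samples containing one transient spike:
signal = [0.1, -0.2, 0.15, -0.1, 1.2, -0.12, 0.18]
features = [rms(signal), impulse_index(signal), margin_index(signal)]
```

Feature vectors like `features`, computed per measurement window, are the kind of input a SOFM can cluster into distinct motor-state regions on its 2D map.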

  18. Right hemisphere advantage for social recognition in the chick.

    PubMed

    Vallortigara, G

    1992-09-01

    Recognition of familiar and unfamiliar conspecifics was studied in pair-reared chicks tested binocularly or with only one eye in use. Chicks were tested on day 3 in pairs composed of either cagemates or strangers. Social discrimination, as measured by the ratio "number of pecks at the strangers/total number of pecks" was impaired in right-eyed chicks with respect to left-eyed and binocular chicks. Male chicks showed higher levels of social pecking than females, and chicks that used both eyes showed higher pecking than monocular chicks. There were no significant differences in the total number of pecks (i.e. pecks at companions plus pecks at strangers) between right- and left-eyed chicks: the impairment in social discrimination of right-eyed chicks seemed to be due partly to a reduction in pecking at strangers and partly to an increase in pecking at companions. It is suggested that neural structures fed by the left eye (mainly located at the right hemisphere) are better at processing and/or storing of visual information which allows recognition of individual conspecifics. This may be part of a wider tendency to respond to small changes in any of a variety of intrinsic stimulus properties.

  19. Contextual consistency facilitates long-term memory of perceptual detail in barely seen images.

    PubMed

    Gronau, Nurit; Shachar, Meytal

    2015-08-01

    It is long known that contextual information affects memory for an object's identity (e.g., its basic level category), yet it is unclear whether schematic knowledge additionally enhances memory for the precise visual appearance of an item. Here we investigated memory for visual detail of merely glimpsed objects. Participants viewed pairs of contextually related and unrelated stimuli, presented for an extremely brief duration (24 ms, masked). They then performed a forced-choice memory-recognition test for the precise perceptual appearance of 1 of 2 objects within each pair (i.e., the "memory-target" item). In 3 experiments, we show that memory-target stimuli originally appearing within contextually related pairs are remembered better than targets appearing within unrelated pairs. These effects are obtained whether the target is presented at test with its counterpart pair object (i.e., when reiterating the original context at encoding) or whether the target is presented alone, implying that the contextual consistency effects are mediated predominantly by processes occurring during stimulus encoding, rather than during stimulus retrieval. Furthermore, visual detail encoding is improved whether object relations involve implied action or not, suggesting that, contrary to some prior suggestions, action is not a necessary component for object-to-object associative "grouping" processes. Our findings suggest that during a brief glimpse, but not under long viewing conditions, contextual associations may play a critical role in reducing stimulus competition for attention selection and in facilitating rapid encoding of sensory details. Theoretical implications with respect to classic frame theories are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  20. Two processes support visual recognition memory in rhesus monkeys.

    PubMed

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-11-29

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.

  1. Two processes support visual recognition memory in rhesus monkeys

    PubMed Central

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-01-01

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans. PMID:22084079

  2. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

    PubMed Central

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field. PMID:27853419
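    Rate-based Poisson spike generation, the first encoding technique the abstract lists, can be sketched as a Bernoulli approximation of a Poisson process: each pixel's intensity sets a firing rate, and in every small time step a spike is emitted with probability rate × dt. The parameter names (max_rate_hz, dt_ms) and the linear intensity-to-rate scaling below are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of rate-based Poisson spike-train generation from a pixel
# intensity in [0, 1]. Rates and step size are illustrative choices.
import random


def poisson_spike_train(intensity, duration_ms, max_rate_hz=100.0, dt_ms=1.0,
                        rng=random):
    """Bernoulli approximation of a Poisson process: in each time step a
    spike occurs with probability rate * dt; intensity scales the rate."""
    rate_hz = intensity * max_rate_hz
    p_spike = rate_hz * (dt_ms / 1000.0)
    spikes = []
    t = 0.0
    while t < duration_ms:
        if rng.random() < p_spike:
            spikes.append(t)
        t += dt_ms
    return spikes


# A bright pixel should fire roughly ten times as often as a dim one.
random.seed(0)
bright = poisson_spike_train(1.0, 1000)  # ~100 expected spikes at 100 Hz
dark = poisson_spike_train(0.1, 1000)    # ~10 expected spikes at 10 Hz
print(len(bright), len(dark))
```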

  3. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    PubMed

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance. Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.

  4. A challenging dissociation in masked identity priming with the lexical decision task.

    PubMed

    Perea, Manuel; Jiménez, María; Gómez, Pablo

    2014-05-01

    The masked priming technique has been used extensively to explore the early stages of visual-word recognition. One key phenomenon in masked priming lexical decision is that identity priming is robust for words, whereas it is small/unreliable for nonwords. This dissociation has usually been explained on the basis that masked priming effects are lexical in nature, and hence there should not be an identity prime facilitation for nonwords. We present two experiments whose results are at odds with the assumption made by models that postulate that identity priming is purely lexical, and also challenge the assumption that word and nonword responses are based on the same information. Our experiments revealed that for nonwords, but not for words, matched-case identity PRIME-TARGET pairs were responded to faster than mismatched-case identity prime-TARGET pairs, and this phenomenon was not modulated by the lowercase/uppercase feature similarity of the stimuli. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Visual associations to retrieve episodic memory across healthy elderly, mild cognitive impairment, and patients with Alzheimer's disease.

    PubMed

    Meyer, Sascha R A; De Jonghe, Jos F M; Schmand, Ben; Ponds, Rudolf W H M

    2018-05-16

    Episodic memory tests need to determine the degree to which patients with moderate to severe memory deficits can still benefit from retrieval support. Especially in the case of Alzheimer's disease (AD), this may help align health care more closely with patients' memory capacities. We investigated whether the different measures of episodic memory of the Visual Association Test-Extended (VAT-E) can provide a more detailed and informative assessment of memory disturbances across a broad range of cognitive decline, from normal to severe impairment as seen in AD, by examining differences in floor effects. The VAT-E consists of 24 pairs of black-and-white line drawings. In a within-group design, we compared score distributions of VAT-E subtests in healthy elderly controls, mild cognitive impairment (MCI), and AD (n = 144), as well as in relation to global cognitive impairment. Paired-associate recall showed a floor effect in 41% of MCI patients and 62% of AD patients. Free recall showed a floor effect in 73% of MCI patients and 84% of AD patients. Multiple-choice cued recognition did not show a floor effect in either of the patient groups. We conclude that the VAT-E covers a broad range of episodic memory decline in patients. As expected, paired-associate recall was of intermediate difficulty, free recall was most difficult, and multiple-choice cued recognition was least difficult for patients. These varying levels of difficulty enable a more accurate determination of the level of retrieval support that can still benefit patients across a broad range of cognitive decline.

  6. KlenTaq polymerase replicates unnatural base pairs by inducing a Watson-Crick geometry.

    PubMed

    Betz, Karin; Malyshev, Denis A; Lavergne, Thomas; Welte, Wolfram; Diederichs, Kay; Dwyer, Tammy J; Ordoukhanian, Phillip; Romesberg, Floyd E; Marx, Andreas

    2012-07-01

    Many candidate unnatural DNA base pairs have been developed, but some of the best-replicated pairs adopt intercalated structures in free DNA that are difficult to reconcile with known mechanisms of polymerase recognition. Here we present crystal structures of KlenTaq DNA polymerase at different stages of replication for one such pair, dNaM-d5SICS, and show that efficient replication results from the polymerase itself inducing the required natural-like structure.

  7. The gender congruency effect during bilingual spoken-word recognition

    PubMed Central

    Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés Kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa

    2016-01-01

    We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132

  8. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high-spatial-frequency, edge-based recognition vs. low-spatial-frequency, lightness- (and color-) based recognition. The former is more central and general, while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.
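    The two spatial-frequency subdivisions described above can be illustrated by splitting an image into a low-pass band (carrying smooth lightness variation) and its high-pass residual (carrying localized edges). The sketch below uses a simple box blur as the low-pass filter on a tiny grayscale grid; the filter choice and radius are illustrative assumptions, not the method of the reviewed work.

```python
# Sketch: decompose an image into low- and high-spatial-frequency bands.
# Low band = box blur (an illustrative low-pass filter); high band = residual.
def box_blur(img, radius=1):
    """Mean filter over a (2*radius+1)-square neighborhood, edge-clamped."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out


def split_bands(img, radius=1):
    """Return (low, high) where low + high reconstructs the image."""
    low = box_blur(img, radius)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high


# A vertical step edge: the high band is zero in flat regions and
# peaks at the dark-to-bright transition, as the edge-based mode expects.
img = [[0.0, 0.0, 1.0, 1.0]] * 4
low, high = split_bands(img)
```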

  9. Widespread Transient Hoogsteen Base-Pairs in Canonical Duplex DNA with Variable Energetics

    PubMed Central

    Alvey, Heidi S.; Gottardo, Federico L.; Nikolova, Evgenia N.; Al-Hashimi, Hashim M.

    2015-01-01

    Hoogsteen base-pairing involves a 180 degree rotation of the purine base relative to Watson-Crick base-pairing within DNA duplexes, creating alternative DNA conformations that can play roles in recognition, damage induction, and replication. Here, using Nuclear Magnetic Resonance R1ρ relaxation dispersion, we show that transient Hoogsteen base-pairs occur across more diverse sequence and positional contexts than previously anticipated. We observe sequence-specific variations in Hoogsteen base-pair energetic stabilities that are comparable to variations in Watson-Crick base-pair stability, with Hoogsteen base-pairs being more abundant for energetically less favorable Watson-Crick base-pairs. Our results suggest that the variations in Hoogsteen stabilities and rates of formation are dominated by variations in Watson-Crick base pair stability, suggesting a late transition state for the Watson-Crick to Hoogsteen conformational switch. The occurrence of sequence and position-dependent Hoogsteen base-pairs provide a new potential mechanism for achieving sequence-dependent DNA transactions. PMID:25185517

  10. Recognition memory is modulated by visual similarity.

    PubMed

    Yago, Elena; Ishai, Alumit

    2006-06-01

    We used event-related fMRI to test whether recognition memory depends on visual similarity between familiar prototypes and novel exemplars. Subjects memorized portraits, landscapes, and abstract compositions by six painters with a unique style, and later performed a memory recognition task. The prototypes were presented with new exemplars that were either visually similar or dissimilar. Behaviorally, novel, dissimilar items were detected faster and more accurately. We found activation in a distributed cortical network that included face- and object-selective regions in the visual cortex, where familiar prototypes evoked stronger responses than new exemplars; attention-related regions in parietal cortex, where responses elicited by new exemplars were reduced with decreased similarity to the prototypes; and the hippocampus and memory-related regions in parietal and prefrontal cortices, where stronger responses were evoked by the dissimilar exemplars. Our findings suggest that recognition memory is mediated by classification of novel exemplars as a match or a mismatch, based on their visual similarity to familiar prototypes.

  11. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  12. Identification in a pseudoknot of a U.G motif essential for the regulation of the expression of ribosomal protein S15.

    PubMed

    Bénard, L; Mathy, N; Grunberg-Manago, M; Ehresmann, B; Ehresmann, C; Portier, C

    1998-03-03

    The ribosomal protein S15 from Escherichia coli binds to a pseudoknot in its own messenger. This interaction is an essential step in the mechanism of S15 translational autoregulation. In a previous study, a recognition determinant for S15 autoregulation, involving a U.G wobble pair, was located in the center of stem I of the pseudoknot. In this study, an extensive mutagenesis analysis has been conducted in and around this U.G pair by comparing the effects of these mutations on the expression level of S15. The results show that the U.G wobble pair cannot be substituted by A.G, C.A, A.C, G.U, or C.G without loss of the autocontrol. In addition, the base pair C.G, adjacent to the 5' side of U, cannot be flipped or changed to another complementary base pair without also inducing derepression of translation. A unique motif, made of only two adjacent base pairs, U.G/C.G, is essential for S15 autoregulation and is presumably involved in direct recognition by the S15 protein.

  13. Identification in a pseudoknot of a U⋅G motif essential for the regulation of the expression of ribosomal protein S15

    PubMed Central

    Bénard, Lionel; Mathy, Nathalie; Grunberg-Manago, Marianne; Ehresmann, Bernard; Ehresmann, Chantal; Portier, Claude

    1998-01-01

    The ribosomal protein S15 from Escherichia coli binds to a pseudoknot in its own messenger. This interaction is an essential step in the mechanism of S15 translational autoregulation. In a previous study, a recognition determinant for S15 autoregulation, involving a U⋅G wobble pair, was located in the center of stem I of the pseudoknot. In this study, an extensive mutagenesis analysis has been conducted in and around this U⋅G pair by comparing the effects of these mutations on the expression level of S15. The results show that the U⋅G wobble pair cannot be substituted by A⋅G, C⋅A, A⋅C, G⋅U, or C⋅G without loss of the autocontrol. In addition, the base pair C⋅G, adjacent to the 5′ side of U, cannot be flipped or changed to another complementary base pair without also inducing derepression of translation. A unique motif, made of only two adjacent base pairs, U⋅G/C⋅G, is essential for S15 autoregulation and is presumably involved in direct recognition by the S15 protein. PMID:9482926

  14. Stringent Nucleotide Recognition by the Ribosome at the Middle Codon Position.

    PubMed

    Liu, Wei; Shin, Dongwon; Ng, Martin; Sanbonmatsu, Karissa Y; Tor, Yitzhak; Cooperman, Barry S

    2017-08-29

    Accurate translation of the genetic code depends on mRNA:tRNA codon:anticodon base pairing. Here we exploit an emissive, isosteric adenosine surrogate that allows direct measurement of the kinetics of codon:anticodon base pair formation during protein synthesis. Our results suggest that codon:anticodon base pairing is subject to tighter constraints at the middle position than at the 5'- and 3'-positions, and further suggest a sequential mechanism of formation of the three base pairs in the codon:anticodon helix.

  15. Address entry while driving: speech recognition versus a touch-screen keyboard.

    PubMed

    Tsimhoni, Omer; Smith, Daniel; Green, Paul

    2004-01-01

    A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.

  16. Structural Basis for the Lesion-scanning Mechanism of the MutY DNA Glycosylase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lan; Chakravarthy, Srinivas; Verdine, Gregory L.

    The highly mutagenic A:8-oxoguanine (oxoG) base pair is generated mainly by misreplication of the C:oxoG base pair, the oxidation product of the C:G base pair. The A:oxoG base pair is particularly insidious because neither base in it carries faithful information to direct the repair of the other. The bacterial MutY (MUTYH in humans) adenine DNA glycosylase is able to initiate the repair of A:oxoG by selectively cleaving the A base from the A:oxoG base pair. The difference between faithful repair and wreaking mutagenic havoc on the genome lies in the accurate discrimination between two structurally similar base pairs: A:oxoG and A:T. Here we present two crystal structures of the MutY N-terminal domain in complex with either undamaged DNA or DNA containing an intrahelical lesion. These structures have captured for the first time a DNA glycosylase scanning the genome for a damaged base in the very first stage of lesion recognition and the base extrusion pathway. The mode of interaction observed here has suggested a common lesion-scanning mechanism across the entire helix-hairpin-helix superfamily to which MutY belongs. In addition, small-angle X-ray scattering studies together with accompanying biochemical assays have suggested a possible role played by the C-terminal oxoG-recognition domain of MutY in lesion scanning.

  17. Array based Discovery of Aptamer Pairs (Open Access Publisher’s Version)

    DTIC Science & Technology

    2014-12-11

    Array-based Discovery of Aptamer Pairs. Minseon Cho, Seung Soo Oh, Jeff Nie, Ron Stewart, Monte J. Radeke, Michael Eisenstein, Peter J... "bidentate" target recognition, with affinities greatly exceeding either monovalent component. DNA aptamers are especially well-suited for such... constructs, because they can be linked via standard synthesis techniques without requiring chemical conjugation. Unfortunately, aptamer pairs are difficult

  18. Trajectory Recognition as the Basis for Object Individuation: A Functional Model of Object File Instantiation and Object-Token Encoding

    PubMed Central

    Fields, Chris

    2011-01-01

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599

  19. Age Differences in Memory Retrieval Shift: Governed by Feeling-of-Knowing?

    PubMed Central

    Hertzog, Christopher; Touron, Dayna R.

    2010-01-01

    The noun-pair lookup (NP) task was used to evaluate strategic shift from visual scanning to retrieval. We investigated whether age differences in feeling-of-knowing (FOK) account for older adults' delayed retrieval shift. Participants were randomly assigned to one of three conditions: (1) standard NP learning, (2) fast binary FOK judgments, or (3) Choice, where participants had to choose in advance whether to see the look-up table or respond from memory. We found small age differences in FOK magnitudes, but major age differences in memory retrieval choices that mirrored retrieval use in the standard NP task. Older adults showed lower resolution in their confidence judgments (CJs) for recognition memory tests on the NP items, and this difference appeared to influence rates of retrieval shift, given that retrieval use was correlated with CJ magnitudes in both age groups. Older adults had particular difficulty with accuracy and confidence for rearranged pairs, relative to intact pairs. Older adults' slowed retrieval shift appears to be due to (a) impaired associative learning early in practice, not just a lower FOK; but also (b) retrieval reluctance later in practice after the degree of associative learning would afford memory-based responding. PMID:21401263

  20. Dichotic and dichoptic digit perception in normal adults.

    PubMed

    Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T

    2011-06-01

    Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. To evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. A multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. 
The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as "speech related output mechanisms" and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects. American Academy of Audiology.

  1. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  2. Structural basis of DNA folding and recognition in an AMP-DNA aptamer complex: distinct architectures but common recognition motifs for DNA and RNA aptamers complexed to AMP.

    PubMed

    Lin, C H; Patel, D J

    1997-11-01

    Structural studies by nuclear magnetic resonance (NMR) of RNA and DNA aptamer complexes identified through in vitro selection and amplification have provided a wealth of information on RNA and DNA tertiary structure and molecular recognition in solution. The RNA and DNA aptamers that target ATP (and AMP) with micromolar affinity exhibit distinct binding site sequences and secondary structures. We report below on the tertiary structure of the AMP-DNA aptamer complex in solution and compare it with the previously reported tertiary structure of the AMP-RNA aptamer complex in solution. The solution structure of the AMP-DNA aptamer complex shows, surprisingly, that two AMP molecules are intercalated at adjacent sites within a rectangular widened minor groove. Complex formation involves adaptive binding where the asymmetric internal bubble of the free DNA aptamer zippers up through formation of a continuous six-base mismatch segment which includes a pair of adjacent three-base platforms. The AMP molecules pair through their Watson-Crick edges with the minor groove edges of guanine residues. These recognition G.A mismatches are flanked by sheared G.A and reversed Hoogsteen G.G mismatch pairs. The AMP-DNA aptamer and AMP-RNA aptamer complexes have distinct tertiary structures and binding stoichiometries. Nevertheless, both complexes have similar structural features and recognition alignments in their binding pockets. Specifically, AMP targets both DNA and RNA aptamers by intercalating between purine bases and through identical G.A mismatch formation. The recognition G.A mismatch stacks with a reversed Hoogsteen G.G mismatch in one direction and with an adenine base in the other direction in both complexes. It is striking that DNA and RNA aptamers selected independently from libraries of 10^14 molecules in each case utilize identical mismatch alignments for molecular recognition with micromolar affinity within binding-site pockets containing common structural elements.

  3. A selection of giant radio sources from NVSS

    DOE PAGES

    Proctor, D. D.

    2016-06-01

    Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20′ and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle-tail and narrow-angle-tail sources, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, it is generally expected that such systems would require a different approach.
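
    The reported 97.8 ± 1.5% figure is the probability that a randomly selected (GRS, non-GRS) pair is ranked correctly by the classifier, which is equivalent to the area under the ROC curve. A minimal sketch of how this statistic can be estimated from classifier scores (toy scores, not the paper's decision-tree software):

```python
def pair_ranking_probability(pos_scores, neg_scores):
    """Probability that a randomly chosen positive outranks a randomly
    chosen negative (ties count half); equivalent to ROC AUC."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: higher means "more likely a giant radio source".
grs = [0.9, 0.8, 0.7]
non_grs = [0.6, 0.4, 0.8]
print(pair_ranking_probability(grs, non_grs))
```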

  4. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it promises a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advances research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to capture the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  5. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied, yet video-based face recognition remains challenging because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed to map still and video face images into a Euclidean space via a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
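
    The matching step described above can be illustrated with a minimal sketch: a still-image embedding is queried against video galleries by Euclidean distance. The embeddings and identity names below are hypothetical; the learned CNN features and training procedure are not reproduced.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_still_to_video(still_emb, video_galleries):
    """Rank video identities by the distance between a still-image
    embedding and the mean frame embedding of each video sequence."""
    def mean_embedding(frames):
        n, d = len(frames), len(frames[0])
        return [sum(f[i] for f in frames) / n for i in range(d)]
    scores = {identity: euclidean(still_emb, mean_embedding(frames))
              for identity, frames in video_galleries.items()}
    return min(scores, key=scores.get)

# Toy 2-D embeddings; a real system would use the learned CNN features.
gallery = {
    "id_a": [[0.1, 0.2], [0.0, 0.3]],
    "id_b": [[0.9, 0.8], [1.0, 0.7]],
}
print(match_still_to_video([0.05, 0.25], gallery))
```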

  6. Associative recognition: a case of recall-to-reject processing.

    PubMed

    Rotello, C M; Heit, E

    2000-09-01

    Two-process accounts of recognition memory assume that memory judgments are based on both a rapidly available familiarity-based process and a slower, more accurate, recall-based mechanism. Past experiments on the time course of item recognition have not supported the recall-to-reject account of the second process, in which the retrieval of an old item is used to reject a similar foil (Rotello & Heit, 1999). In three new experiments, using analyses similar to those of Rotello and Heit, we found robust evidence for recall-to-reject processing in associative recognition, for word pairs, and for list-discrimination judgments. Put together, these results have implications for two-process accounts of recognition.

  7. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    PubMed

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods that convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP; under the bad segmentation condition, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. These strategies may aid the development of the image processing module for future retinal prostheses and thus provide more benefit to patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
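
    The clustering step, grouping salient pixels into an ROI, can be sketched with a minimal 1-D fuzzy c-means over scalar saliency values. This is a toy stand-in for the study's 2-D saliency maps and GrabCut segmentation, with invented per-pixel values:

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: soft-cluster scalar saliency values
    into c groups; returns (centers, memberships)."""
    lo, hi = min(values), max(values)
    centers = [lo + k * (hi - lo) / (c - 1) for k in range(c)]  # deterministic init
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        for i, x in enumerate(values):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # Center update: membership-weighted mean of the values.
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(values))]
            centers[k] = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return centers, u

saliency = [0.05, 0.1, 0.08, 0.9, 0.85, 0.95]   # toy per-pixel saliency
centers, u = fuzzy_c_means(saliency)
high = max(range(len(centers)), key=lambda k: centers[k])
roi = [x for x, mem in zip(saliency, u) if mem[high] > 0.5]
print(sorted(roi))
```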

  8. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  9. Investigating an Application of Speech-to-Text Recognition: A Study on Visual Attention and Learning Behaviour

    ERIC Educational Resources Information Center

    Huang, Y-M.; Liu, C-J.; Shadiev, Rustam; Shen, M-H.; Hwang, W-Y.

    2015-01-01

    One major drawback of previous research on speech-to-text recognition (STR) is that most findings showing the effectiveness of STR for learning were based upon subjective evidence. Very few studies have used eye-tracking techniques to investigate visual attention of students on STR-generated text. Furthermore, not much attention was paid to…

  10. Distinguishing familiarity from fluency for the compound word pair effect in associative recognition.

    PubMed

    Ahmad, Fahad N; Hockley, William E

    2017-09-01

    We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of a pair on a separate screen at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs with no difference in discrimination) was not reduced. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A), but in associative recognition the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded by matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on the enhanced familiarity of unitized CW pairs.
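
    The "no difference in discrimination" part of the CW effect is standardly assessed with the signal-detection index d′, which separates discrimination from response bias. A sketch with hypothetical hit and false-alarm rates (not the paper's data) showing how raising both rates can shift bias while leaving d′ essentially unchanged:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: compound-word (CW) pairs raise both hits and
# false alarms relative to non-compound pairs, shifting bias while
# leaving discrimination (d') essentially unchanged.
cw, non_cw = d_prime(0.85, 0.30), d_prime(0.75, 0.20)
print(round(cw, 2), round(non_cw, 2))
```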

  11. Seamless Tracing of Human Behavior Using Complementary Wearable and House-Embedded Sensors

    PubMed Central

    Augustyniak, Piotr; Smoleń, Magdalena; Mikrut, Zbigniew; Kańtoch, Eliasz

    2014-01-01

    This paper presents a multimodal system for seamless surveillance of elderly people in their living environment. The system simultaneously uses a wearable sensor network for each individual and premise-embedded sensors specific to each environment. The paper demonstrates the benefits of using complementary information from two types of mobility sensors: visual flow-based image analysis and an accelerometer-based wearable network. The paper provides results for indoor recognition of several elementary poses and outdoor recognition of complex movements. Rather than giving a complete system description, we draw particular attention to a polar histogram-based method of visual pose recognition, the complementary use and synchronization of data from wearable and premise-embedded networks, and an automatic danger detection algorithm driven by two premise- and subject-related databases. The novelty of our approach also consists in feeding the databases with real-life recordings from the subject, and in using the dynamic time-warping algorithm to measure distances between actions represented as elementary poses in behavioral records. The main results of testing our method include: 95.5% accuracy of elementary pose recognition by the video system, 96.7% accuracy by the accelerometer-based system, 98.9% accuracy by the combined accelerometer and video-based system, and 80% accuracy of complex outdoor activity recognition by the accelerometer-based wearable system. PMID:24787640
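
    The dynamic time-warping distance used to compare behavioral records can be sketched in a few lines. Toy integer pose codes stand in for the paper's elementary-pose representations:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two sequences of
    elementary-pose codes (here: integers for simplicity)."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same action performed at different speeds stays close under DTW.
walk = [1, 1, 2, 3, 3, 3]
walk_fast = [1, 2, 3]
print(dtw_distance(walk, walk_fast))
```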

  12. Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yu, Yongtao; Li, Jonathan; Wen, Chenglu; Guan, Haiyan; Luo, Huan; Wang, Cheng

    2016-03-01

    This paper presents a novel algorithm for detection and recognition of traffic signs in mobile laser scanning (MLS) data for intelligent transportation-related applications. The traffic sign detection task is accomplished based on 3-D point clouds by using bag-of-visual-phrases representations; whereas the recognition task is achieved based on 2-D images by using a Gaussian-Bernoulli deep Boltzmann machine-based hierarchical classifier. To exploit high-order feature encodings of feature regions, a deep Boltzmann machine-based feature encoder is constructed. For detecting traffic signs in 3-D point clouds, the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.956, 0.946, 0.907, and 0.951, respectively, on the four selected MLS datasets. For on-image traffic sign recognition, a recognition accuracy of 97.54% is achieved by using the proposed hierarchical classifier. Comparative studies with the existing traffic sign detection and recognition methods demonstrate that our algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.
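
    The detection stage builds on visual-word quantization: local descriptors are assigned to a learned codebook and pooled into histograms (the paper's "visual phrases" additionally encode co-occurring word pairs, which is omitted here). A minimal bag-of-visual-words sketch with an invented toy codebook:

```python
import math

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest codeword and
    return a normalized bag-of-visual-words histogram."""
    def nearest(d):
        return min(range(len(codebook)),
                   key=lambda k: math.dist(d, codebook[k]))
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest(d)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]          # toy 2-word vocabulary
descriptors = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.2, 0.1]]
print(bovw_histogram(descriptors, codebook))
```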

  13. Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

    PubMed

    Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G

    2017-04-01

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method, the visual world paradigm, consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous-day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24 h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.

  14. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  15. Stringent Nucleotide Recognition by the Ribosome at the Middle Codon Position

    PubMed Central

    Liu, Wei; Shin, Dongwon; Ng, Martin; Sanbonmatsu, Karissa Y.; Tor, Yitzhak; Cooperman, Barry S.

    2017-01-01

    Accurate translation of the genetic code depends on mRNA:tRNA codon:anticodon base pairing. Here we exploit an emissive, isosteric adenosine surrogate that allows direct measurement of the kinetics of codon:anticodon base pair formation during protein synthesis. Our results suggest that codon:anticodon base pairing is subject to tighter constraints at the middle position than at the 5′- and 3′-positions, and further suggest a sequential mechanism of formation of the three base pairs in the codon:anticodon helix. PMID:28850078

  16. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children, which is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  17. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    PubMed

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to lexical decision (LDT) and semantic categorization (SCT) tasks. We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Individual recognition between mother and infant bats (Myotis)

    NASA Technical Reports Server (NTRS)

    Turner, D.; Shaughnessy, A.; Gould, E.

    1972-01-01

    The recognition process between mother and infant brown bats, and the basis for that recognition, are analyzed. Two parameters, ultrasonic communication and olfactory stimuli, are investigated. The test animals were not allowed any visual contact. It was concluded that individual recognition between mother and infant occurred. However, it could not be determined whether the recognition was based on ultrasonic signals or olfactory stimuli.

  19. Modelling of DNA-protein recognition

    NASA Technical Reports Server (NTRS)

    Rein, R.; Garduno, R.; Colombano, S.; Nir, S.; Haydock, K.; Macelroy, R. D.

    1980-01-01

    Computer model-building procedures using stereochemical principles together with theoretical energy calculations appear to be, at this stage, the most promising route toward the elucidation of DNA-protein binding schemes and recognition principles. A review of models and bonding principles is conducted and approaches to modeling are considered, taking into account possible di-hydrogen-bonding schemes between a peptide and a base (or a base pair) of a double-stranded nucleic acid in the major groove, aspects of computer graphic modeling, and a search for isogeometric helices. The energetics of recognition complexes is discussed and several models for peptide DNA recognition are presented.

  20. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  1. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
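
    The cross-modal comparison above rests on correlating voxel-wise response patterns across modalities: a higher correlation for same-category patterns indicates a shared, modality-independent representation. A minimal sketch with hypothetical response vectors (not the study's fMRI data):

```python
def pearson(x, y):
    """Pearson correlation between two equally long response patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Toy voxel response patterns to manmade objects in two modalities.
visual = [0.8, 0.1, 0.6, 0.2, 0.9]
tactile = [0.7, 0.2, 0.5, 0.3, 0.8]
print(round(pearson(visual, tactile), 3))
```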

  2. A benefit of context reinstatement to recognition memory in aging: the role of familiarity processes.

    PubMed

    Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M

    2017-11-01

    Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.

  3. Improving associative memory in older adults with unitization.

    PubMed

    Ahmad, Fahad N; Fernandes, Myra; Hockley, William E

    2015-01-01

    We examined whether unitization inherent in preexperimental associations could reduce the associative deficit in older adults. In Experiment 1, younger and older adults studied compound word (CW; e.g., store keeper) and noncompound word (NCW; e.g., needle birth) pairs. We found a reduction in the age-related associative deficit such that older, but not younger, adults showed a discrimination advantage for CW relative to NCW pairs on a yes-no associative recognition test. These results suggest that CW pairs, compared to NCW pairs, provide schematic support that older adults can use to improve their memory. In Experiment 2, reducing study time in younger adults decreased associative recognition performance but did not produce a discrimination advantage for CW pairs. In Experiment 3, both older and younger adults showed a discrimination advantage for CW pairs on a two-alternative forced-choice recognition test, which encourages greater use of familiarity. These results suggest that test format influenced young adults' use of familiarity during associative recognition of unitized pairs, and that older adults rely more on familiarity than recollection for associative recognition. Unitization of preexperimental associations, as in CW pairs, can alleviate age-related associative deficits.

  4. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyberspace monitoring systems, using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization and was linked to a 3D graphics engine for validating learning and classification results and for understanding the relationship between the human operator and the autonomous system. Scene recognition is performed by feeding synthetically generated data to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can support a human operator and be evaluated in a detailed and systematic way.
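
    The two-level recognition logic, first objects from features, then the scene from the set of objects present, can be sketched as follows. The prototypes, feature space, and scene rules are invented for illustration; the paper's dynamic-logic and expectation-maximization machinery is not reproduced:

```python
import math

# Level 1: object prototypes in a toy 2-D feature space (hypothetical).
PROTOTYPES = {"car": [0.9, 0.1], "person": [0.1, 0.9], "tree": [0.5, 0.5]}
# Level 2: scenes defined by the set of objects present (hypothetical).
SCENES = {frozenset({"car", "person"}): "street",
          frozenset({"tree", "person"}): "park"}

def recognize_scene(feature_vectors):
    """Two-stage hierarchical recognition: features -> objects -> scene."""
    objects = {min(PROTOTYPES, key=lambda o: math.dist(f, PROTOTYPES[o]))
               for f in feature_vectors}
    return SCENES.get(frozenset(objects), "unknown")

print(recognize_scene([[0.85, 0.15], [0.12, 0.88]]))
```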

  5. Recognition tunneling measurement of the conductance of DNA bases embedded in self-assembled monolayers.

    PubMed

    Huang, Shuo; Chang, Shuai; He, Jin; Zhang, Peiming; Liang, Feng; Tuchband, Michael; Li, Shengqing; Lindsay, Stuart

    2010-12-09

    The DNA bases interact strongly with gold electrodes, complicating efforts to measure the tunneling conductance through hydrogen-bonded Watson Crick base pairs. When bases are embedded in a self-assembled alkane-thiol monolayer to minimize these interactions, new features appear in the tunneling data. These new features track the predictions of density-functional calculations quite well, suggesting that they reflect tunnel conductance through hydrogen-bonded base pairs.

  6. Recognition tunneling measurement of the conductance of DNA bases embedded in self-assembled monolayers

    PubMed Central

    Huang, Shuo; Chang, Shuai; He, Jin; Zhang, Peiming; Liang, Feng; Tuchband, Michael; Li, Shengqing; Lindsay, Stuart

    2010-01-01

    The DNA bases interact strongly with gold electrodes, complicating efforts to measure the tunneling conductance through hydrogen-bonded Watson Crick base pairs. When bases are embedded in a self-assembled alkane-thiol monolayer to minimize these interactions, new features appear in the tunneling data. These new features track the predictions of density-functional calculations quite well, suggesting that they reflect tunnel conductance through hydrogen-bonded base pairs. PMID:21197382

  7. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether or not they were simultaneous. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair) or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared with unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. The effects of alphabet and expertise on letter perception

    PubMed Central

    Wiley, Robert W.; Wilson, Colin; Rapp, Brenda

    2016-01-01

    Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778

  9. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  10. Capturing specific abilities as a window into human individuality: the example of face recognition.

    PubMed

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  11. Learning and Recognition of Clothing Genres From Full-Body Images.

    PubMed

    Hidayati, Shintami C; You, Chuang-Wen; Cheng, Wen-Huang; Hua, Kai-Lung

    2018-05-01

    According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we proposed a novel approach for automatically classifying clothing genres based on these visually differentiable style elements. A set of style elements that are crucial for recognizing specific visual styles of clothing genres was identified based on clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated as variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, a dataset containing 3250 full-body shots crawled from popular online stores was built. Recognition results show that our proposed algorithms achieved promising overall precision, recall, and F-score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features in recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition.
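    The overall precision, recall, and F-score figures reported above combine per-class counts in the standard way. As a quick reference, a minimal sketch (the counts below are illustrative, not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F-score from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts for one hypothetical clothing genre
p, r, f1 = precision_recall_f1(tp=443, fp=56, fn=57)
```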

  12. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
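    As a rough illustration of the recognition step described above (HOG features computed over a sliding window), here is a toy, NumPy-only descriptor. It is a simplified stand-in, not the authors' implementation: the real pipeline first reconstructs the 3D scene from the axially distributed sensors and feeds descriptors to an SVM classifier, both omitted here.

```python
import numpy as np

def hog_window(window, n_bins=9):
    """Toy histogram-of-oriented-gradients descriptor for one window.

    Gradients are binned by unsigned orientation (0-180 degrees) and
    weighted by magnitude. Real HOG adds cells, block normalization, etc.
    """
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Slide a window over a synthetic image and collect descriptors;
# each descriptor would then be classified by an SVM.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0  # bright square -> strong edges
descriptors = [hog_window(img[r:r + 16, c:c + 16])
               for r in range(0, 17, 8) for c in range(0, 17, 8)]
```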

  13. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America

  14. Infant Visual Attention and Object Recognition

    PubMed Central

    Reynolds, Greg D.

    2015-01-01

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333

  15. Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss.

    PubMed

    Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew

    In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults of age between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
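    The "modified one-third gain targets" are not specified in the abstract; the sketch below implements only the basic one-third gain rule (insertion gain equal to one third of the hearing threshold at each audiometric frequency) as a hypothetical illustration of the idea.

```python
def one_third_gain_targets(audiogram_db):
    """Simplified one-third gain rule: insertion gain at each frequency is
    one third of the hearing threshold (dB HL).

    `audiogram_db` maps frequency (Hz) -> threshold in dB HL. The study's
    "modified" targets presumably include further corrections; this sketch
    shows only the basic rule.
    """
    return {freq: round(hl / 3.0, 1) for freq, hl in audiogram_db.items()}

# Hypothetical mild-to-moderate loss audiogram
targets = one_third_gain_targets({500: 30, 1000: 40, 2000: 50, 4000: 60})
```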

  16. Interactions between Visual Attention and Episodic Retrieval: Dissociable Contributions of Parietal Regions during Gist-Based False Recognition

    PubMed Central

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2012-01-01

    The interaction between episodic retrieval and visual attention is relatively unexplored. Given that systems mediating attention and episodic memory appear to be segregated, and perhaps even in competition, it is unclear how visual attention is recruited during episodic retrieval. We investigated the recruitment of visual attention during the suppression of gist-based false recognition, the tendency to falsely recognize items that are similar to previously encountered items. Recruitment of visual attention was associated with activity in the dorsal attention network. The inferior parietal lobule, often implicated in episodic retrieval, tracked veridical retrieval of perceptual detail and showed reduced activity during the engagement of visual attention, consistent with a competitive relationship with the dorsal attention network. These findings suggest that the contribution of the parietal cortex to interactions between visual attention and episodic retrieval entails distinct systems that contribute to different components of the task while also suppressing each other. PMID:22998879

  17. [Symptoms and lesion localization in visual agnosia].

    PubMed

    Suzuki, Kyoko

    2004-11-01

    There are two cortical visual processing streams, the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream can result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, are different from others in that patients can recognize a face as a face and buildings as buildings, but cannot identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition was confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Enlarged lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, which is in agreement with the results of neuroimaging studies that revealed activation of the bilateral occipito-temporal regions during object recognition tasks.

  18. Optimization of Visual Information Presentation for Visual Prosthesis.

    PubMed

    Guo, Fei; Yang, Yuan; Gao, Yong

    2018-01-01

    Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a huge loss of information occurs when daily scenes are presented. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We conclude that a visual prosthesis using our proposed strategy can assist the blind by improving their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses.
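    As a toy illustration of the "foreground edge detection" idea under prosthetic resolution limits, the sketch below computes Sobel edge magnitude and block-averages it onto a coarse phosphene grid. This is an assumption-laden stand-in, not the authors' implementation, which also involves salient object detection and background removal.

```python
import numpy as np

def edges_to_phosphenes(img, grid=(16, 16)):
    """Sobel edge magnitude, downsampled to a coarse phosphene grid,
    mimicking edge-based presentation at the low resolution of
    simulated prosthetic vision.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for rr in range(h):
        for cc in range(w):
            patch = pad[rr:rr + 3, cc:cc + 3]
            gx[rr, cc] = (patch * kx).sum()
            gy[rr, cc] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    # Block-average down to the phosphene grid.
    bh, bw = h // grid[0], w // grid[1]
    out = mag[:grid[0] * bh, :grid[1] * bw]
    out = out.reshape(grid[0], bh, grid[1], bw).mean(axis=(1, 3))
    return out / out.max() if out.max() > 0 else out

# A bright square on a dark background: only its edges survive.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
phos = edges_to_phosphenes(img, grid=(16, 16))
```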

  19. Optimization of Visual Information Presentation for Visual Prosthesis

    PubMed Central

    Gao, Yong

    2018-01-01

    Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a huge loss of information occurs when daily scenes are presented. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We conclude that a visual prosthesis using our proposed strategy can assist the blind by improving their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses. PMID:29731769

  20. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    PubMed

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
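    A deep belief network is a stack of restricted Boltzmann machines trained layer by layer. The toy sketch below trains a single binary RBM with one-step contrastive divergence (CD-1) on synthetic "characters" and exposes its hidden activations for a linear readout. It is a minimal analogue of one layer of the paper's model, not its implementation; the data and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal binary RBM trained with one-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b_v)   # reconstruction
        h1 = self.hidden_probs(v1)
        # CD-1 approximation to the log-likelihood gradient
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)                 # reconstruction error

# Toy "characters": two binary prototypes with 5% of bits flipped.
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], float)
data = np.repeat(protos, 50, axis=0)
data = np.abs(data - (rng.random(data.shape) < 0.05))

rbm = TinyRBM(n_visible=6, n_hidden=4)
errors = [rbm.cd1_step(data) for _ in range(200)]
features = rbm.hidden_probs(data)  # high-level code for a linear readout
```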

  1. Support Vector Machine-based classification of protein folds using the structural properties of amino acid residues and amino acid residue pairs.

    PubMed

    Shamim, Mohammad Tabrez Anwar; Anwaruddin, Mohammad; Nagarajaram, H A

    2007-12-15

    Fold recognition is a key step in the protein structure discovery process, especially when traditional sequence comparison methods fail to yield convincing structural homologies. Although many methods have been developed for protein fold recognition, their accuracies remain low. This can be attributed to insufficient exploitation of fold discriminatory features. We have developed a new method for protein fold recognition using structural information of amino acid residues and amino acid residue pairs. Since protein fold recognition can be treated as a protein fold classification problem, we have developed a Support Vector Machine (SVM) based classifier approach that uses secondary structural state and solvent accessibility state frequencies of amino acids and amino acid pairs as feature vectors. Among the individual properties examined, secondary structural state frequencies of amino acids gave an overall accuracy of 65.2% for fold discrimination, which is better than the accuracy of any method reported so far in the literature. Combination of secondary structural state frequencies with solvent accessibility state frequencies of amino acids and amino acid pairs further improved the fold discrimination accuracy to more than 70%, which is approximately 8% higher than the best available method. In this study we have also tested, for the first time, an all-together multi-class method known as the Crammer and Singer method for protein fold classification. Our studies reveal that the three multi-class classification methods, namely one versus all, one versus one, and the Crammer and Singer method, yield similar predictions. Dataset and stand-alone program are available upon request.
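    The feature-extraction step can be pictured as follows: given a protein's secondary-structural state string, compute the frequencies of the three states and of the ordered state pairs. The sketch below covers only this step with a made-up input string; the SVM classifier and the solvent-accessibility features are omitted.

```python
from itertools import product

STATES = "HEC"  # helix, strand, coil

def ss_state_features(ss):
    """Feature vector from a secondary-structure string: frequencies of
    the 3 states plus the 9 ordered state pairs, in the spirit of the
    fold-discriminatory features described above (feature step only).
    """
    n = len(ss)
    singles = [ss.count(s) / n for s in STATES]
    pairs = ["".join(p) for p in product(STATES, repeat=2)]
    n_pairs = n - 1
    doubles = [sum(ss[i:i + 2] == p for i in range(n_pairs)) / n_pairs
               for p in pairs]
    return singles + doubles  # 3 + 9 = 12-dimensional vector

vec = ss_state_features("HHHHCCEEEECC")  # hypothetical assignment string
```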

  2. Visual recognition system of cherry picking robot based on Lab color model

    NASA Astrophysics Data System (ADS)

    Zhang, Qirong; Zuo, Jianjun; Yu, Tingzhong; Wang, Yan

    2017-12-01

    This paper presents a visual recognition system suitable for cherry picking. First, the system filters the image with a vector median filter. It then extracts the a channel of the Lab color model to separate the cherries from the background. The cherry contour was then fitted by the least-squares method, and the centroid and radius of each cherry were computed, allowing the cherries to be successfully extracted.
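    The centroid-and-radius step corresponds to a least-squares circle fit. One common linear formulation (the Kasa fit, assumed here since the abstract does not name the exact variant) solves directly for the circle parameters:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares (Kasa) circle fit: solve the linear system for
    a, b, c in x^2 + y^2 + a*x + b*y + c = 0, then recover the
    centroid (cx, cy) and radius r of the contour.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2, -b / 2
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Points on a circle of radius 5 centred at (2, 3), as a contour stand-in
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 2 + 5 * np.cos(theta)
y = 3 + 5 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
```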

  3. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  4. Superficial Priming in Episodic Recognition

    ERIC Educational Resources Information Center

    Dopkins, Stephen; Sargent, Jesse; Ngo, Catherine T.

    2010-01-01

    We explored the effect of superficial priming in episodic recognition and found it to be different from the effect of semantic priming in episodic recognition. Participants made recognition judgments to pairs of items, with each pair consisting of a prime item and a test item. Correct positive responses to the test item were impeded if the prime…

  5. The picture superiority effect in associative recognition.

    PubMed

    Hockley, William E

    2008-10-01

    The picture superiority effect has been well documented in tests of item recognition and recall. The present study shows that the picture superiority effect extends to associative recognition. In three experiments, students studied lists consisting of random pairs of concrete words and pairs of line drawings; then they discriminated between intact (old) and rearranged (new) pairs of words and pictures at test. The discrimination advantage for pictures over words was seen in a greater hit rate for intact picture pairs, but there was no difference in the false alarm rates for the two types of stimuli. That is, there was no mirror effect. The same pattern of results was found when the test pairs consisted of the verbal labels of the pictures shown at study (Experiment 4), indicating that the hit rate advantage for picture pairs represents an encoding benefit. The results have implications for theories of the picture superiority effect and models of associative recognition.

  6. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-value intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition rates for the intended emotion are 40.9% for audio-only, 58.2% for visual-only, and 63.6% for audio-visual data. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  7. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  8. Intelligent form removal with character stroke preservation

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.

    1996-03-01

    A new technique for intelligent form removal has been developed along with a new method for evaluating its impact on optical character recognition (OCR). All the dominant lines in the image are automatically detected using the Hough line transform and intelligently erased while simultaneously preserving overlapping character strokes by computing line width statistics and keying off of certain visual cues. This new method of form removal operates on loosely defined zones with no image deskewing. Any field in which the writer is provided a horizontal line to enter a response can be processed by this method. Several examples of processed fields are provided, including a comparison of results between the new method and a commercially available forms removal package. Even if this new form removal method did not improve character recognition accuracy, it would still be a significant improvement to the technology because the requirement of a priori knowledge of the form's geometric details has been greatly reduced. This relaxes the recognition system's dependence on rigid form design, printing, and reproduction by automatically detecting and removing some of the physical structures (lines) on the form. Using the National Institute of Standards and Technology (NIST) public domain form-based handprint recognition system, the technique was tested on a large number of fields containing randomly ordered handprinted lowercase alphabets, as these letters (especially those with descenders) frequently touch and extend through the line along which they are written. Preserving character strokes improves overall lowercase recognition performance by 3%, which is a net improvement, but a single performance number like this does not communicate how the recognition process was really influenced. There are expected to be trade-offs with the introduction of any new technique into a complex recognition system. To understand both the improvements and the trade-offs, a new analysis was designed to compare the statistical distributions of individual confusion pairs between two systems. As OCR technology continues to improve, sophisticated analyses like this are necessary to reduce the errors remaining in complex recognition problems.
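    The line-detection step rests on the Hough line transform: each foreground pixel votes for every (rho, theta) line through it, and accumulator peaks mark dominant lines such as a form's horizontal rules. A minimal NumPy-only version on a synthetic form image might look like this (the real system adds line-width statistics and stroke preservation, omitted here):

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Minimal Hough line transform: returns the vote accumulator over
    (rho, theta) and the rho offset used for indexing.
    """
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# A form-like image with one horizontal rule at row 20
img = np.zeros((40, 60), dtype=bool)
img[20, 5:55] = True
acc, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
# Peak at theta = 90 degrees (horizontal line) with rho = 20 (its row)
```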

  9. Differences in binding and monitoring mechanisms contribute to lifespan age differences in false memory.

    PubMed

    Fandakova, Yana; Shing, Yee Lee; Lindenberger, Ulman

    2013-10-01

    Based on a 2-component framework of episodic memory development across the lifespan (Shing & Lindenberger, 2011), we examined the contribution of memory-related binding and monitoring processes to false memory susceptibility in childhood and old age. We administered a repeated continuous recognition task to children (N = 20, 10-12 years), younger adults (N = 20, 20-27 years), and older adults (N = 21, 68-76 years). Participants saw the same set of unrelated word pairs in 3 consecutive runs and their task was to identify pair reoccurrences within runs. Across runs, correct detection of repeated pairs decreased in children only, whereas false recognition of lure pairs showed a greater increase in older adults than in children or younger adults. False recognition of rearranged pairs decreased across runs for all participants. This decrease was most pronounced in children, in particular for high-confidence memory errors. We conclude that memory binding mechanisms are sufficiently developed in children to facilitate memory monitoring and reduce false memory for associative information. In contrast, older adults show senescent impairments in both binding and monitoring mechanisms that both contribute to elevated illusory recollections in old age. We conclude that binding and monitoring processes during memory performance follow different developmental trajectories from childhood to old age.

  10. Infant visual attention and object recognition.

    PubMed

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Simplified biased random walk model for RecA-protein-mediated homology recognition offers rapid and accurate self-assembly of long linear arrays of binding sites

    NASA Astrophysics Data System (ADS)

    Kates-Harbeck, Julian; Tilloy, Antoine; Prentiss, Mara

    2013-07-01

    Inspired by RecA-protein-based homology recognition, we consider the pairing of two long linear arrays of binding sites. We propose a fully reversible, physically realizable biased random walk model for rapid and accurate self-assembly due to the spontaneous pairing of matching binding sites, where the statistics of the searched sample are included. In the model, there are two bound conformations, and the free energy for each conformation is a weakly nonlinear function of the number of contiguous matched bound sites.
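    The biased-walk idea can be illustrated with a toy one-dimensional simulation over the number of contiguously paired sites. This is a sketch under assumed parameters (free-energy changes in units of kT, step probabilities set by detailed balance), not the authors' full model; all names are hypothetical.

    ```python
    import math
    import random

    def biased_walk(n_sites, dg_match, dg_mismatch, matches, steps=10000, seed=1):
        """Toy biased random walk over the number of contiguously paired sites.

        Extending the pairing by one site changes free energy by dg (in kT),
        depending on whether that site matches. Forward/backward probabilities
        satisfy detailed balance: p_fwd / p_back = exp(-dg).
        """
        rng = random.Random(seed)
        n = 0  # current number of contiguously paired sites
        for _ in range(steps):
            dg = dg_match if matches[min(n, n_sites - 1)] else dg_mismatch
            p_fwd = math.exp(-dg) / (1.0 + math.exp(-dg))
            if rng.random() < p_fwd:
                n = min(n + 1, n_sites)   # pair one more site
            else:
                n = max(n - 1, 0)         # unpair one site (fully reversible)
        return n
    ```

    With matching sites favorable (dg < 0) the walk drifts toward full pairing, while a mismatched sequence drifts back toward zero, giving rapid and accurate discrimination without any irreversible step.
    
    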

  12. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    PubMed

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Performance of Language-Coordinated Collective Systems: A Study of Wine Recognition and Description

    PubMed Central

    Zubek, Julian; Denkiewicz, Michał; Dębska, Agnieszka; Radkowska, Alicja; Komorowska-Mach, Joanna; Litwin, Piotr; Stępień, Magdalena; Kucińska, Adrianna; Sitarska, Ewa; Komorowska, Krystyna; Fusaroli, Riccardo; Tylén, Kristian; Rączaszek-Leonardi, Joanna

    2016-01-01

    Most of our perceptions of and engagements with the world are shaped by our immersion in social interactions, cultural traditions, tools and linguistic categories. In this study we experimentally investigate the impact of two types of language-based coordination on the recognition and description of a complex sensory stimulus: red wine. Participants were asked to taste, remember and subsequently recognize samples of wines within a larger set in a two-by-two experimental design: (1) either individually or in pairs, and (2) with or without the support of a sommelier card—a cultural linguistic tool designed for wine description. Both the effectiveness of recognition and the kinds of errors in the four conditions were analyzed. While our experimental manipulations did not impact recognition accuracy, bias-variance decomposition of the error revealed non-trivial differences in how participants solved the task. Pairs generally displayed reduced bias and increased variance compared to individuals; however, the variance dropped significantly when they used the sommelier card. This variance-reducing effect of the sommelier card was observed only in pairs; individuals did not seem to benefit from the cultural linguistic tool. Analysis of descriptions generated with the aid of sommelier cards shows that pairs were more coherent and discriminative than individuals. The findings are discussed in terms of the global properties and dynamics of collective systems constrained by different types of cultural practices. PMID:27729875
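    The bias-variance decomposition used here can be sketched for group judgments: average squared error splits exactly into a shared systematic component (bias squared) plus disagreement across responders (variance). A minimal illustration, not the study's analysis code; variable names are hypothetical.

    ```python
    import numpy as np

    def bias_variance(judgments, truth):
        """Decompose the mean squared error of a group's judgments into
        bias^2 (shared systematic error) and variance (disagreement across
        responders).

        judgments: array of shape (n_responders, n_items)
        truth: array of shape (n_items,)
        """
        mean_judgment = judgments.mean(axis=0)          # group consensus per item
        bias2 = ((mean_judgment - truth) ** 2).mean()   # systematic error
        variance = judgments.var(axis=0).mean()         # responder disagreement
        mse = ((judgments - truth[None, :]) ** 2).mean()
        return bias2, variance, mse
    ```

    The identity mse = bias2 + variance holds exactly, which is what makes the decomposition useful: two conditions with equal accuracy can still differ in how their errors are composed, as in the pairs-versus-individuals comparison above.
    
    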

  14. Evidence for conformational capture mechanism for damage recognition by NER protein XPC/Rad4.

    NASA Astrophysics Data System (ADS)

    Chakraborty, Sagnik; Steinbach, Peter J.; Paul, Debamita; Min, Jung-Hyun; Ansari, Anjum

    Altered flexibility of damaged DNA sites is considered to play an important role in damage recognition by DNA repair proteins, yet characterizing lesion-induced DNA dynamics has remained a challenge. We have combined picosecond-resolved fluorescence lifetime measurements with a cytosine-analog FRET pair uniquely sensitive to local unwinding/twisting to analyze DNA conformational distributions. This approach maps out with unprecedented sensitivity the alternative conformations accessible to a series of DNA constructs containing 3-base-pair mismatches, suitable model lesions for the DNA repair protein xeroderma pigmentosum C (XPC) complex. XPC initiates eukaryotic nucleotide excision repair by recognizing various DNA lesions primarily through DNA deformability. Structural studies show that Rad4 (the yeast ortholog of XPC) unwinds DNA at the lesion site and flips out two nucleotide pairs. Our results elucidate a broad range of conformations accessible to mismatched DNA even in the absence of the protein. Notably, the most severely distorted conformations bear remarkable resemblance to the deformed conformation seen in the crystal structure of the Rad4-bound "recognition" complex, supporting, for the first time, a possible "conformational capture" mechanism for damage recognition by XPC/Rad4.

  15. Extending the language of DNA molecular recognition by polyamides: unexpected influence of imidazole and pyrrole arrangement on binding affinity and specificity.

    PubMed

    Buchmueller, Karen L; Staples, Andrew M; Howard, Cameron M; Horick, Sarah M; Uthe, Peter B; Le, N Minh; Cox, Kari K; Nguyen, Binh; Pacheco, Kimberly A O; Wilson, W David; Lee, Moses

    2005-01-19

    Pyrrole (Py) and imidazole (Im) polyamides can be designed to target specific DNA sequences. The effect that the pyrrole and imidazole arrangement, plus DNA sequence, have on sequence specificity and binding affinity has been investigated using DNA melting (ΔT_M), circular dichroism (CD), and surface plasmon resonance (SPR) studies. SPR results obtained from a complete set of triheterocyclic polyamides show a dramatic difference in the affinity of f-ImPyIm for its cognate DNA (K_eq = 1.9 × 10^8 M^-1) and f-PyPyIm for its cognate DNA (K_eq = 5.9 × 10^5 M^-1), which could not have been anticipated prior to characterization of these compounds. Moreover, f-ImPyIm has a 10-fold greater affinity for CGCG than distamycin A has for its cognate, AATT. To understand this difference, the triamide dimers are divided into two structural groupings: central and terminal pairings. The four possible central pairings show decreasing selectivity and affinity for their respective cognate sequences: -ImPy- > -PyPy- > -PyIm- ≈ -ImIm-. These results extend the language of current design motifs for polyamide sequence recognition to include the use of "words" for recognizing two adjacent base pairs, rather than "letters" for binding to single base pairs. Thus, polyamides designed to target Watson-Crick base pairs should utilize the strength of -ImPy- and -PyPy- central pairings. The f/Im and f/Py terminal groups yielded no advantage for their respective C/G or T/A base pairs. The exception is with the -ImPy- central pairing, for which f/Im has a 10-fold greater affinity for C/G than f/Py has for T/A.

  16. Super Normal Vector for Human Activity Recognition with Depth Cameras.

    PubMed

    Yang, Xiaodong; Tian, YingLi

    2017-05-01

    The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In extensive experiments, the proposed approach achieves performance superior to state-of-the-art methods on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.
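    The low-level ingredient of this pipeline, per-pixel surface normals estimated from a depth image, can be sketched with finite differences. This is only the starting point that SNV-style descriptors extend into spatio-temporal "polynormals"; a sketch, not the paper's implementation.

    ```python
    import numpy as np

    def depth_normals(depth):
        """Estimate per-pixel surface normals from a depth image.

        Uses finite-difference depth gradients: for a surface z = f(x, y),
        an (unnormalized) normal is (-dz/dx, -dz/dy, 1).
        """
        z = depth.astype(float)
        dzdx = np.gradient(z, axis=1)
        dzdy = np.gradient(z, axis=0)
        n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)  # unit normals
    ```

    On a flat depth map every normal points straight at the camera, (0, 0, 1); on a tilted plane the normals tilt accordingly, which is the shape cue the descriptor aggregates over space-time cells.
    
    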

  17. 3D visual mechanism by neural networking

    NASA Astrophysics Data System (ADS)

    Sugiyama, Shigeki

    2007-04-01

    Some computer vision systems are commercially available, but they remain far from everyday use in applications such as security monitoring or recognizing the behavior of a target object. Sensing such surroundings requires recognizing detailed descriptions of an object, such as the distance to the object, its detailed figure, and its edges, and present recognition systems do not give a clear picture of the mechanisms behind these abilities. This paper therefore studies how a pair of human eyes recognizes distance, object edges, and the object itself, in order to extract the basic essences of visual mechanisms. These basic mechanisms of object recognition are then simplified and logically extended for application to a computer vision system. Some results of these studies are presented in this paper.

  18. Emotional conditioning to masked stimuli and modulation of visuospatial attention.

    PubMed

    Beaver, John D; Mogg, Karin; Bradley, Brendan P

    2005-03-01

    Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise or with an innocuous tone, in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during either acquisition or expression. Copyright 2005 APA, all rights reserved.

  19. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, so the resulting output is optimally sparse in space and time: pixel-individual and precisely timed only when new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
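    The level-crossing sampling principle described above can be sketched in one dimension: a pixel emits an event each time its input has changed by a fixed threshold since the last event, rather than being sampled at fixed frame times. A simplified illustration of the sensor's sampling scheme, not its hardware behavior.

    ```python
    def level_crossing_events(signal, threshold):
        """Emit (sample_index, polarity) events whenever a 1-D signal has
        changed by `threshold` since the last event: the pixel-wise sampling
        used by event-based vision sensors, greatly simplified."""
        events = []
        ref = signal[0]  # reference level at the last emitted event
        for i, v in enumerate(signal[1:], start=1):
            while v - ref >= threshold:   # input rose: ON events
                ref += threshold
                events.append((i, +1))
            while ref - v >= threshold:   # input fell: OFF events
                ref -= threshold
                events.append((i, -1))
        return events
    ```

    A constant input produces no events at all, which is where the sparsity comes from: only change, i.e., new information, is transmitted, each event precisely timed.
    
    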

  20. Modeling guidance and recognition in categorical search: bridging human and computer object detection.

    PubMed

    Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris

    2013-10-08

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
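    The paper's conclusion, that guidance is recognition applied to blurred (peripheral) input, can be sketched with a toy classifier trained on sharp exemplars and tested on blurred versions of the same objects. This uses a nearest-centroid classifier and a box blur as illustrative stand-ins for the paper's SVM/HMAX models; feature vectors and names are hypothetical.

    ```python
    import numpy as np

    def box_blur(x, k=3):
        """Crude 1-D box blur standing in for low-resolution peripheral vision."""
        return np.convolve(x, np.ones(k) / k, mode="same")

    def fit_centroids(X, y):
        """Train a nearest-centroid classifier on sharp (foveal) exemplars."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(centroids, X):
        """Classify feature vectors by distance to each class centroid."""
        labels = sorted(centroids)
        dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                          for c in labels])
        return np.array([labels[i] for i in dists.argmin(axis=0)])
    ```

    If the same classifier still separates the classes after blurring, no second "guidance" feature set is needed, which mirrors the paper's argument that guidance and recognition share features.
    
    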

  2. Polymerase recognition of 2-thio-iso-guanine·5-methyl-4-pyrimidinone (iGs·P)--A new DD/AA base pair.

    PubMed

    Lee, Dong-Kye; Switzer, Christopher

    2016-02-15

    Polymerase specificity is reported for a previously unknown base pair with a non-standard DD/AA hydrogen bonding pattern: 2-thio-iso-guanine·5-methyl-4-pyrimidinone. Our findings suggest that atomic substitution may provide a solution for low fidelity previously associated with enzymatic copying of iso-guanine. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  4. The effect of visual and interaction fidelity on spatial cognition in immersive virtual environments.

    PubMed

    Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew

    2006-01-01

    Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between-groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings are consistent with similar effects revealed in two earlier studies summarized here, which demonstrated that a less "naturalistic" interaction interface (one of low interaction fidelity) provoked a higher proportion of recognitions based on visual mental images.

  5. The posterior parietal cortex in recognition memory: a neuropsychological study.

    PubMed

    Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A

    2008-01-01

    Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion-symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.

  6. Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition

    DTIC Science & Technology

    2013-01-01

    Yu, Felix X.; Cao, Liangliang; Feris, Rogerio S.; Smith, John R.; Chang, Shih-Fu (Columbia University; IBM T. J. Watson Research Center). This report provides additional remarks on "Designing Category-Level Attributes for Discriminative Visual Recognition" [3], beginning with an overview of the proposed approach.

  7. An Excel-based tool for evaluating and visualizing geothermobarometry data

    NASA Astrophysics Data System (ADS)

    Hora, John Milan; Kronz, Andreas; Möller-McNett, Stefan; Wörner, Gerhard

    2013-07-01

    Application of geothermobarometry based on equilibrium exchange of chemical components between two mineral phases in natural samples frequently leads to the dilemma of either: (1) relying on relatively few measurements where there is a high likelihood of equilibrium, or (2) using many analysis pairs, where a significant proportion may not be useful and must be filtered out. The second approach leads to the challenges of (1) evaluation of equilibrium for large numbers of analysis pairs, (2) finding patterns in the dataset where multiple populations exist, and (3) visualizing relationships between calculated temperatures and compositional and textural parameters. Given the limitations of currently-used thermobarometry spreadsheets, we redesign them in a way that eliminates tedium by automating data importing, quality control and calculations, while making all results visible in a single view. Rather than using a traditional spreadsheet layout, we array the calculations in a grid. Each color-coded grid node contains the calculated temperature result corresponding to the intersection of two analyses given in the corresponding column and row. We provide Microsoft Excel templates for some commonly-used thermometers, that can be modified for use with any geothermometer or geobarometer involving two phases. Conditional formatting and ability to sort according to any chosen parameter simplifies pattern recognition, while tests for equilibrium can be incorporated into grid calculations. A case study of rhyodacite domes at Parinacota volcano, Chile, indicates a single population of Fe-Ti oxide temperatures, despite Mg-Mn compositional variability. Crystal zoning and differing thermal histories are, however, evident as a bimodal population of plagioclase-amphibole temperatures. Our approach aids in identification of suspect analyses and xenocrysts and visualization of links between temperature and phase composition. This facilitates interpretation of whether heat transfer was accompanied by bulk mass transfer, and to what degree diffusion has homogenized calculated temperature results in hybrid magmas.
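    The grid layout described above, one node per (phase A, phase B) analysis pair with failing nodes flagged, can be sketched generically. The thermometer and equilibrium-test functions here are hypothetical placeholders for whatever calibration a user plugs in; this is not the spreadsheet's own formula.

    ```python
    def thermo_grid(a_analyses, b_analyses, calc_temp, in_equilibrium):
        """Build the grid of calculated temperatures for every analysis pair.

        Rows are phase-A analyses, columns are phase-B analyses; nodes that
        fail the equilibrium test are set to None (the spreadsheet instead
        flags them with conditional formatting).
        """
        return [[calc_temp(a, b) if in_equilibrium(a, b) else None
                 for b in b_analyses]
                for a in a_analyses]
    ```

    Computing every pairing and filtering afterward is what resolves the dilemma in the abstract: all analysis pairs are used, but non-equilibrium pairs are automatically screened out rather than hand-picked.
    
    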

  8. Visual Biopsy by Hydrogen Peroxide-Induced Signal Amplification.

    PubMed

    Zhao, Wenjie; Yang, Sheng; Yang, Jinfeng; Li, Jishan; Zheng, Jing; Qing, Zhihe; Yang, Ronghua

    2016-11-01

    Visual biopsy has attracted special interest from surgeons due to its simplicity and practicality; however, the limited sensitivity of the technique makes early diagnosis difficult. To circumvent this problem, herein, we report a visual signal amplification strategy for establishing a marker-recognizable biopsy that enables early cancer diagnosis. In our proposed approach, hydrogen peroxide (H2O2) was selected as a potential underlying marker because of its close relationship with cancer progression. For selective recognition of H2O2 in the process of visual biopsy, a benzylbenzeneboronic acid pinacol ester-decorated copolymer, namely, PMPC-Bpe, was synthesized, affording H2O2-responsive micelles in which amylose was trapped. The presence of H2O2 activates the boronate ester recognition site and triggers release of the abundant indicator amylose, leading to signal amplification. The released amylose reacts with the KI/I2 solution added to the sample, and the resulting amylose-KI/I2 complex shows a distinct blue color (absorbance at 574 nm) for visual amplified detection. The feasibility of the proposed method is demonstrated by visualizing the H2O2 content of cancers at different stages and in three kinds of actual cancerous samples. As far as we know, this is the first paradigm to rationally design a signal amplification-based, molecularly recognizable biopsy for visual and sensitive disease identification, which will open new possibilities for marker-recognition and signal amplification-based biopsy in disease progression.

  9. Visual scan paths are abnormal in deluded schizophrenics.

    PubMed

    Phillips, M L; David, A S

    1997-01-01

    One explanation for delusion formation is that delusions result from a distorted appreciation of complex stimuli. This study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path: a map tracing the direction and duration of gaze as an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less deluded will allow further clarification of the relationship between viewing strategies and delusions.

  10. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  11. Neural theory for the perception of causal actions.

    PubMed

    Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A

    2012-07-01

    The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.

  12. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment, which consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants indicated, for each recognition word, whether it had not been presented or, if it had, with which type of information it had been paired during the encoding phase. Word recognition in the auditory condition was higher than in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. The results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
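
    The confidence-of-recognition score d' used above is computed from hit and false-alarm rates; a minimal sketch using the standard signal-detection formula (the rates below are invented for illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Confidence-of-recognition score: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 85% hits, 20% false alarms
print(round(d_prime(0.85, 0.20), 2))  # → 1.88
```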

  13. Illusory conjunctions in visual short-term memory: Individual differences in corpus callosum connectivity and splitting attention between the two hemifields.

    PubMed

    Qin, Shuo; Ray, Nicholas R; Ramakrishnan, Nithya; Nashiro, Kaoru; O'Connell, Margaret A; Basak, Chandramallika

    2016-11-01

    Overloading the capacity of visual attention can result in mistakenly combining the various features of an object, that is, illusory conjunctions. We hypothesize that if the two hemispheres separately process visual information by splitting attention, connectivity of the corpus callosum, the brain structure integrating the two hemispheres, would predict the degree of illusory conjunctions. In the current study, we assessed two types of illusory conjunctions using a memory-scanning paradigm; the features were presented either across the two opposite hemifields or within the same hemifield. Four objects, each with two visual features, were briefly presented together, followed by probe recognition and a confidence rating for recognition accuracy. MRI scans were also obtained. Results indicated that successful recollection during probe recognition was better for across-hemifield conjunctions than for within-hemifield conjunctions, lending support to the bilateral advantage of the two hemispheres in visual short-term memory. Age-related differences in the underlying mechanisms of the bilateral advantage indicated greater reliance on recollection-based processing in young adults and on familiarity-based processing in older adults. Moreover, the integrity of the posterior corpus callosum was more predictive of opposite-hemifield illusory conjunctions than of within-hemifield illusory conjunctions, even after controlling for age. That is, individuals with lower posterior corpus callosum connectivity had better recognition for objects when their features were recombined from opposite hemifields than from the same hemifield. This study is the first to investigate the role of the corpus callosum in splitting attention between versus within hemifields. © 2016 Society for Psychophysiological Research.

  14. The relational luring effect: Retrieval of relational information during associative recognition.

    PubMed

    Popov, Vencislav; Hristova, Penka; Anders, Royce

    2017-05-01

    Here we argue that semantic relations (e.g., works in: nurse-hospital) have abstract independent representations in long-term memory (LTM) and that the same representation is accessed by all exemplars of a specific relation. We present evidence from two associative recognition experiments that uncovered a novel relational luring effect (RLE) in recognition memory. Participants studied word pairs and then discriminated between intact (old) pairs and recombined lures. In the first experiment, participants responded more slowly to lures that were relationally similar (table-cloth) to studied pairs (floor-carpet) than to relationally dissimilar lures (pipe-water). Experiment 2 extended the RLE by showing a continuous effect of relational lure strength on recognition times (RTs), false alarms, and hits. It used a continuous pair recognition task in which each recombined lure or target could be preceded by 0, 1, 2, 3 or 4 different exemplars of the same relation. RTs and false alarms increased linearly with the number of different previously seen relationally similar pairs. Moreover, more typical exemplars of a given relation led to a stronger RLE. Finally, hits for intact pairs also rose with the number of previously studied different relational instances. These results suggest that semantic relations exist as independent representations in LTM and that during associative recognition these representations can be a spurious source of familiarity. We discuss the implications of the RLE for current models of semantic and episodic memory, unitization in associative recognition, analogical reasoning and retrieval, as well as constructive memory research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy environments such as cars or trains. In addition, people who are deaf or hard of hearing cannot benefit from audio-based speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could therefore work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without using any audio information. First, an Active Shape Model (ASM) is used to detect and track the face and lips in a video sequence. Second, shape, optical-flow and spatial-frequency features are extracted from the lip region located by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine (SVM) is trained to classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
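
    The final classification stage, chronologically ordered per-frame features mapped to word labels, can be sketched as follows. This is a hedged stand-in: a nearest-centroid classifier replaces the paper's Support Vector Machine, and the toy feature vectors are invented:

```python
# Stand-in for the SVM stage: nearest-centroid classification over
# chronologically concatenated per-frame lip features (toy data).
def concat_frames(frames):
    """Order per-frame feature vectors chronologically into one vector."""
    return [x for frame in frames for x in frame]

def train_centroids(samples):
    """samples: {word: [feature_vector, ...]} -> {word: centroid}"""
    return {w: [sum(col) / len(vecs) for col in zip(*vecs)]
            for w, vecs in samples.items()}

def classify(centroids, vec):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda w: dist(centroids[w], vec))

# Toy training data: two "words", each seen once as 3 frames of 2 features.
train = {
    "hello": [concat_frames([[0.1, 0.9], [0.2, 0.8], [0.1, 0.9]])],
    "world": [concat_frames([[0.9, 0.1], [0.8, 0.2], [0.9, 0.1]])],
}
cents = train_centroids(train)
print(classify(cents, concat_frames([[0.15, 0.85], [0.2, 0.8], [0.1, 0.9]])))  # → hello
```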

  16. Visual Half-Field Experiments Are a Good Measure of Cerebral Language Dominance if Used Properly: Evidence from fMRI

    ERIC Educational Resources Information Center

    Hunter, Zoe R.; Brysbaert, Marc

    2008-01-01

    Traditional neuropsychology employs visual half-field (VHF) experiments to assess cerebral language dominance. This approach is based on the assumption that left cerebral dominance for language leads to faster and more accurate recognition of words in the right visual half-field (RVF) than in the left visual half-field (LVF) during tachistoscopic…

  17. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Recognition is Used as One Cue Among Others in Judgment and Decision Making

    ERIC Educational Resources Information Center

    Richter, Tobias; Spath, Pamela

    2006-01-01

    Three experiments with paired comparisons were conducted to test the noncompensatory character of the recognition heuristic (D. G. Goldstein & G. Gigerenzer, 2002) in judgment and decision making. Recognition and knowledge about the recognized alternative were manipulated. In Experiment 1, participants were presented pairs of animal names where…

  19. Functional architecture of visual emotion recognition ability: A latent variable approach.

    PubMed

    Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W

    2016-05-01

    Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    PubMed

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
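
    Much of HMAX's cost and its position tolerance come from its C ("complex cell") stages, which max-pool S-unit responses over local neighborhoods. A minimal sketch of that pooling step over a toy response map (the actual model also pools across scales):

```python
# Minimal sketch of an HMAX C-stage: max-pooling an S-unit response map
# over non-overlapping pool x pool spatial windows.
def c_pool(s_map, pool=2):
    """Max-pool a 2D response map; the output is smaller and shift-tolerant."""
    h, w = len(s_map), len(s_map[0])
    return [[max(s_map[i + di][j + dj]
                 for di in range(pool) for dj in range(pool))
             for j in range(0, w - pool + 1, pool)]
            for i in range(0, h - pool + 1, pool)]

# Toy 4x4 S1 response map (e.g., Gabor filter outputs at one orientation).
s1 = [[0.1, 0.7, 0.2, 0.0],
      [0.3, 0.4, 0.9, 0.1],
      [0.0, 0.2, 0.5, 0.6],
      [0.8, 0.1, 0.3, 0.2]]
print(c_pool(s1))  # → [[0.7, 0.9], [0.8, 0.6]]
```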

  1. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    ERIC Educational Resources Information Center

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  2. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.

  3. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content such as face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine and is accessible to members of the Canadian National Film Board (NFB) Cineroute site. For example, an end user will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
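
    The shot-level XML descriptions and predicate-based queries described above can be illustrated in miniature. The element names below are invented for illustration, not the actual MADIS/MPEG-7 schema, and the stdlib ElementTree module stands in for the XQuery engine:

```python
# Hypothetical miniature of an MPEG-7-style shot description and a
# content-based query ("low motion activity AND contains a face").
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<Video year="1965">
  <Shot id="s1" motion="low"><Face actor="Jane Doe"/><Word>peace</Word></Shot>
  <Shot id="s2" motion="high"><Word>war</Word></Shot>
</Video>
""")

hits = [s.get("id") for s in doc.findall("Shot")
        if s.get("motion") == "low" and s.find("Face") is not None]
print(hits)  # → ['s1']
```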

  4. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images at sea background are notorious for their low signal-to-noise ratio; consequently, target recognition in infrared imagery with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image-processing techniques are combined so as to exploit the strengths of both. The visual attention algorithm automatically searches for salient regions, represents them by a set of winner points, and displays them as circles centered at those points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate filtering and segmentation, and a labeling operation is then applied selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, mark its centroid on the corresponding original image, and thereby localize the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was applied to several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the proposed algorithm.
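
    The winner-point idea, salient locations selected as maxima of an attention map, can be sketched as follows. The details (thresholding, 8-neighborhood maxima) are assumptions for illustration, not the paper's exact model:

```python
# Sketch: pick "winner points" as thresholded local maxima of a saliency
# map; a fixed rectangle around each would then guide filtering/segmentation.
def winner_points(sal, thresh=0.5):
    h, w = len(sal), len(sal[0])
    wins = []
    for i in range(h):
        for j in range(w):
            v = sal[i][j]
            if v < thresh:
                continue
            neigh = [sal[y][x]
                     for y in range(max(0, i - 1), min(h, i + 2))
                     for x in range(max(0, j - 1), min(w, j + 2))
                     if (y, x) != (i, j)]
            if all(v > n for n in neigh):  # strict 8-neighborhood maximum
                wins.append((i, j))
    return wins

sal = [[0.1, 0.2, 0.1],
       [0.2, 0.9, 0.1],
       [0.1, 0.1, 0.6]]
print(winner_points(sal))  # → [(1, 1)]
```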

  5. Recognition of T·G mismatched base pairs in DNA by stacked imidazole-containing polyamides: surface plasmon resonance and circular dichroism studies

    PubMed Central

    Lacy, Eilyn R.; Cox, Kari K.; Wilson, W. David; Lee, Moses

    2002-01-01

    An imidazole-containing polyamide trimer, f-ImImIm, where f is a formamido group, was recently found using NMR methods to recognize T·G mismatched base pairs. In order to characterize in detail the T·G recognition affinity and specificity of imidazole-containing polyamides, f-ImIm, f-ImImIm and f-PyImIm were synthesized. The kinetics and thermodynamics for the polyamides binding to Watson–Crick and mismatched (containing one or two T·G, A·G or G·G mismatched base pairs) hairpin oligonucleotides were determined by surface plasmon resonance and circular dichroism (CD) methods. f-ImImIm binds significantly more strongly to the T·G mismatch-containing oligonucleotides than to the sequences with other mismatched or with Watson–Crick base pairs. Compared with the Watson–Crick CCGG sequence, f-ImImIm associates more slowly with DNAs containing T·G mismatches in place of one or two C·G base pairs and, more importantly, the dissociation rate from the T·G oligonucleotides is very slow (small kd). These results clearly demonstrate the binding selectivity and enhanced affinity of side-by-side imidazole/imidazole pairings for T·G mismatches and show that the affinity and specificity increase arise from much lower kd values with the T·G mismatched duplexes. CD titration studies of f-ImImIm complexes with T·G mismatched sequences produce strong induced bands at ∼330 nm with clear isodichroic points, in support of a single minor groove complex. CD DNA bands suggest that the complexes remain in the B conformation. PMID:11937638
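
    The abstract's argument that a small dissociation rate (kd) drives the affinity gain can be made concrete with the textbook relation KD = kd/ka; the rate values below are illustrative, not the paper's measured constants:

```python
# Illustrative numbers only: affinity is set by K_D = k_d / k_a, so a very
# slow off-rate (small k_d), as reported for f-ImImIm on T.G mismatches,
# directly lowers K_D (i.e., raises affinity).
def K_D(ka, kd):
    """Equilibrium dissociation constant from association/dissociation rates."""
    return kd / ka

slow_off = K_D(ka=1e5, kd=1e-3)   # slow dissociation
fast_off = K_D(ka=1e5, kd=1e-1)   # same on-rate, 100x faster off-rate
print(f"{slow_off:.1e} {fast_off:.1e}")  # → 1.0e-08 1.0e-06
```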

  6. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
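
    The "simple learned weighted sum" linking hypothesis is, computationally, a linear readout over mean firing rates. A toy sketch with invented two-neuron "rates" and a perceptron learning rule (the study itself fits weights over ~60,000-neuron IT populations):

```python
# Toy linear readout: learn weights w and bias b so that a weighted sum of
# mean firing rates separates two object classes (labels in {-1, +1}).
def train_readout(rates, labels, epochs=20, lr=0.1):
    w = [0.0] * len(rates[0])
    b = 0.0
    for _ in range(epochs):
        for r, y in zip(rates, labels):
            pred = 1 if sum(wi * ri for wi, ri in zip(w, r)) + b > 0 else -1
            if pred != y:  # perceptron update on mistakes
                w = [wi + lr * y * ri for wi, ri in zip(w, r)]
                b += lr * y
    return w, b

def decide(w, b, r):
    return 1 if sum(wi * ri for wi, ri in zip(w, r)) + b > 0 else -1

# Invented "neurons": object A drives unit 0, object B drives unit 1.
rates = [[5.0, 1.0], [6.0, 0.5], [1.0, 5.5], [0.8, 6.0]]
labels = [1, 1, -1, -1]
w, b = train_readout(rates, labels)
print(decide(w, b, [5.5, 0.9]))  # → 1
```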

  7. Visual integration enhances associative memory equally for young and older adults without reducing hippocampal encoding activation.

    PubMed

    Memel, Molly; Ryan, Lee

    2017-06-01

    The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. 
Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was observed during the Combined condition. Older adults showed less overall activation in MTL regions compared to young adults, and associative memory performance was most strongly predicted by prefrontal, rather than MTL, activation. We suggest that visual integration benefits both young and older adults similarly, and provides a special case of unitization that may be mediated by recollective, rather than familiarity-based encoding processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Heteroditopic receptors for ion-pair recognition.

    PubMed

    McConnell, Anna J; Beer, Paul D

    2012-05-21

    Ion-pair recognition is a new field of research emerging from cation and anion coordination chemistry. Specific types of heteroditopic receptor designs for ion pairs and the complexity of ion-pair binding are discussed to illustrate key concepts such as cooperativity. The importance of this area of research is reflected by the wide variety of potential applications of ion-pair receptors, including applications as membrane transport and salt solubilization agents and sensors. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    PubMed

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
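
    Two of the classic short-term features implemented by such feature-extraction modules, frame energy and zero-crossing rate, can be computed from scratch as below. This is an illustrative sketch, not the pyAudioAnalysis API itself (which provides a much larger feature set plus classifiers):

```python
# From-scratch short-term audio features: per-frame energy and
# zero-crossing rate over a sliding window.
import math

def short_term_features(signal, win, step):
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        frame = signal[start:start + win]
        energy = sum(x * x for x in frame) / win
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (win - 1)
        feats.append((energy, zcr))
    return feats

# 100 Hz tone "sampled" at 1000 Hz; energy of a pure sine is ~0.5.
sig = [math.sin(2 * math.pi * 100 * n / 1000) for n in range(100)]
for energy, zcr in short_term_features(sig, win=50, step=25):
    print(f"energy={energy:.3f} zcr={zcr:.3f}")
```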

  11. Extensions of the picture superiority effect in associative recognition.

    PubMed

    Hockley, William E; Bancroft, Tyler

    2011-12-01

    Previous research has shown that the picture superiority effect (PSE) is seen in tests of associative recognition for random pairs of line drawings compared to pairs of concrete words (Hockley, 2008). In the present study we demonstrated that the PSE for associative recognition is still observed when subjects have correctly identified the individual items of each pair as old (Experiment 1), and that this effect is not due to rehearsal borrowing (Experiment 2). The PSE for associative recognition also is shown to be present but attenuated for mixed picture-word pairs (Experiment 3), and similar in magnitude for pairs of simple black and white line drawings and coloured photographs of detailed objects (Experiment 4). The results are consistent with the view that the semantic meaning of nameable pictures is activated faster than that of words thereby affording subjects more time to generate and elaborate meaningful associations between items depicted in picture form. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  12. Touch influences perceived gloss

    PubMed Central

    Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.

    2016-01-01

    Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction: slippery objects were rated as glossier and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception, a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492

  13. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  14. The role of visual imagery in the retention of information from sentences.

    PubMed

    Drose, G S; Allen, G L

    1994-01-01

    We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.

  15. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  16. Exploring the association between visual perception abilities and reading of musical notation.

    PubMed

    Lee, Horng-Yih

    2012-06-01

    In the reading of music, the acquisition of pitch information depends primarily upon the spatial position of notes as well as upon an individual's spatial processing ability. This study investigated the relationship between the ability to read single notes and visual-spatial ability. Participants with high and low single-note reading abilities were differentiated based upon differences in musical notation-reading ability, and their spatial processing and object recognition abilities were then assessed. The group with lower note-reading abilities made more errors in the mental rotation task than did the group with higher note-reading abilities. In contrast, there was no significant difference between the two groups in the object recognition task. These results suggest that note-reading may be related to visual-spatial processing abilities, and not to an individual's object recognition ability.

  17. Infant Visual Recognition Memory

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2004-01-01

    Visual recognition memory is a robust form of memory that is evident from early infancy, shows pronounced developmental change, and is influenced by many of the same factors that affect adult memory; it is surprisingly resistant to decay and interference. Infant visual recognition memory shows (a) modest reliability, (b) good discriminant…

  18. WebGIVI: a web-based gene enrichment analysis and visualization tool.

    PubMed

    Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J

    2017-05-04

    A major challenge of high-throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists, which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes with informative terms (iTerms) obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool ( http://raven.anr.udel.edu/webgivi/ ) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene:iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image, which is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI . The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php .
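    Since WebGIVI accepts any two-column tab-separated data, a pre-upload sanity check can be sketched in a few lines of Python (assuming, per the description above, one gene<TAB>iTerm pair per line; the gene and term names below are hypothetical):

```python
from collections import defaultdict

def parse_pairs(tsv_text):
    """Parse two-column gene<TAB>iTerm lines into a mapping from
    gene symbol to a sorted list of its iTerms (blank lines skipped)."""
    pairs = defaultdict(set)
    for line in tsv_text.splitlines():
        line = line.strip()
        if not line:
            continue
        gene, iterm = line.split("\t", 1)
        pairs[gene].add(iterm)
    return {gene: sorted(terms) for gene, terms in pairs.items()}

# Hypothetical gene:iTerm pairs in the two-column format WebGIVI accepts.
sample = "TP53\tapoptosis\nTP53\ttumor suppressor\nBRCA1\tDNA repair\n"
mapping = parse_pairs(sample)
```

    This deduplicates repeated pairs and gives a quick per-gene view of the iTerm list before uploading.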

  19. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
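    Model-brain comparisons of this kind are commonly carried out with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each system, then correlate their upper triangles. The sketch below is a generic RSA computation on synthetic data, not the authors' exact pipeline.

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix (RDM): 1 - Pearson
    correlation between the response patterns of every condition pair.
    activations: (n_conditions, n_features)."""
    return 1.0 - np.corrcoef(activations)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation of the two RDMs' upper triangles
    (rank via double argsort; assumes no tied dissimilarities)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
brain = rng.standard_normal((10, 50))                 # e.g. 10 conditions x 50 sensors
model = brain + 0.1 * rng.standard_normal((10, 50))   # a closely related system
similarity = rdm_similarity(rdm(brain), rdm(model))
```

    Two systems with near-identical representational geometry yield a similarity near 1, while unrelated systems hover near 0; sweeping this comparison over MEG time points or fMRI regions gives the kind of spatio-temporal correspondence map the abstract describes.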

  20. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.

  1. Investigation into the visual perceptive ability of anaesthetists during ultrasound-guided interscalene and femoral blocks conducted on soft embalmed cadavers: a randomised single-blind study.

    PubMed

    Mustafa, A; Seeley, J; Munirama, S; Columb, M; McKendrick, M; Schwab, A; Corner, G; Eisma, R; Mcleod, G

    2018-04-01

    Errors may occur during regional anaesthesia whilst searching for nerves, needle tips, and test doses. Poor visual search impacts on decision making, clinical intervention, and patient safety. We conducted a randomised single-blind study in a single university hospital. Twenty trainees and two consultants examined the paired B-mode and fused B-mode and elastography video recordings of 24 interscalene and 24 femoral blocks conducted on two soft embalmed cadavers. Perineural injection was randomised equally to 0.25, 0.5, and 1.0 ml volumes. Tissue displacement perceived on both imaging modalities was defined as 'target' or 'distractor'. Our primary objective was to test the anaesthetists' perception of the number and proportion of targets and distractors on B-mode and fused elastography videos collected during femoral and sciatic nerve block on soft embalmed cadavers. Our secondary objectives were to determine the differences between novices and experts, and between test-dose volumes, and to measure the area and brightness of spread and strain patterns. All anaesthetists recognised perineural spread using 0.25 ml volumes. Distractor patterns were recognised in 133 (12%) of B-mode and in 403 (38%) of fused B-mode and elastography patterns; P<0.001. With elastography, novice recognition improved from 12 to 37% (P<0.001), and consultant recognition increased from 24 to 53%; P<0.001. Distractor recognition improved from 8 to 31% using 0.25 ml volumes (P<0.001), and from 15 to 45% using 1 ml volumes (P<0.001). Visual search improved with fused B-mode and elastography imaging, with larger test-dose volumes, and with greater clinical experience. A need exists to investigate image search strategies. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  2. Social learning of predators in the dark: understanding the role of visual, chemical and mechanical information.

    PubMed

    Manassa, R P; McCormick, M I; Chivers, D P; Ferrari, M C O

    2013-08-22

    The ability of prey to observe and learn to recognize potential predators from the behaviour of nearby individuals can dramatically increase survival and, not surprisingly, is widespread across animal taxa. A range of sensory modalities are available for this learning, with visual and chemical cues being well-established modes of transmission in aquatic systems. The use of other sensory cues in mediating social learning in fishes, including mechano-sensory cues, remains unexplored. Here, we examine the role of different sensory cues in social learning of predator recognition, using juvenile damselfish (Amphiprion percula). Specifically, we show that a predator-naive observer can socially learn to recognize a novel predator when paired with a predator-experienced conspecific in total darkness. Furthermore, this study demonstrates that when threatened, individuals release chemical cues (known as disturbance cues) into the water. These cues induce an anti-predator response in nearby individuals; however, they do not facilitate learnt recognition of the predator. As such, another sensory modality, probably mechano-sensory in origin, is responsible for information transfer in the dark. This study highlights the diversity of sensory cues used by coral reef fishes in a social learning context.

  3. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex (especially those that evolved to support the visual processing of faces) are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  4. Recall deficits in stroke patients with thalamic lesions covary with damage to the parvocellular mediodorsal nucleus of the thalamus.

    PubMed

    Pergola, Giulio; Güntürkün, Onur; Koch, Benno; Schwarz, Michael; Daum, Irene; Suchan, Boris

    2012-08-01

    The functional role of the mediodorsal thalamic nucleus (MD) and its cortical network in memory processes is discussed controversially. While Aggleton and Brown (1999) suggested a role for recognition and not recall, Van der Werf et al. (2003) suggested that this nucleus is functionally related to executive function and strategic retrieval, based on its connections to the prefrontal cortices (PFC). The present study used a lesion approach including patients with focal thalamic lesions to examine the functions of the MD, the intralaminar nuclei and the midline nuclei in memory processing. A newly designed pair association task was used, which allowed the assessment of recognition and cued recall performance. Volume loss in thalamic nuclei was estimated as a predictor for alterations in memory performance. Patients performed poorer than healthy controls on recognition accuracy and cued recall. Furthermore, patients responded slower than controls specifically on recognition trials followed by successful cued recall of the paired associate. Reduced recall of picture pairs and increased response times during recognition followed by cued recall covaried with the volume loss in the parvocellular MD. This pattern suggests a role of this thalamic region in recall and thus recollection, which does not fit the framework proposed by Aggleton and Brown (1999). The functional specialization of the parvocellular MD accords with its connectivity to the dorsolateral PFC, highlighting the role of this thalamocortical network in explicit memory (Van der Werf et al., 2003). Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Nucleic acid constructs containing orthogonal site selective recombinases (OSSRs)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilmore, Joshua M.; Anderson, J. Christopher; Dueber, John E.

    The present invention provides for a recombinant nucleic acid comprising a nucleotide sequence comprising a plurality of constructs, wherein each construct independently comprises a nucleotide sequence of interest flanked by a pair of recombinase recognition sequences. Each pair of recombinase recognition sequences is recognized by a distinct recombinase. Optionally, each construct can, independently, further comprise one or more genes encoding a recombinase capable of recognizing the pair of recombinase recognition sequences of the construct. The recombinase can be an orthogonal (non-cross reacting), site-selective recombinase (OSSR).

  6. Memory effects of sleep, emotional valence, arousal and novelty in children.

    PubMed

    Vermeulen, Marije C M; van der Heijden, Kristiaan B; Benjamins, Jeroen S; Swaab, Hanna; van Someren, Eus J W

    2017-06-01

    Effectiveness of memory consolidation is determined by multiple factors, including sleep after learning, emotional valence, arousal and novelty. Few studies have investigated how the effect of sleep compares with, and interacts with, these other factors, and virtually none have done so in children. The present study did so by repeated assessment of declarative memory in 386 children (45% boys) aged 9-11 years through an online word-pair task. Children were randomly assigned to either a morning or evening learning session of 30 unrelated word-pairs with positively, neutrally or negatively valenced cues and neutral targets. After immediately assessing baseline recognition, delayed recognition was recorded either 12 or 24 h later, resulting in four different assessment schedules. One week later, the procedure was repeated with exactly the same word-pairs to evaluate whether effects differed for relearning versus original novel learning. Mixed-effect logistic regression models were used to evaluate how the probability of correct recognition was affected by sleep, valence, arousal, novelty and their interactions. Both immediate and delayed recognition were worse for pairs with negatively valenced or less arousing cue words. Relearning improved immediate and delayed word-pair recognition. In contrast to these effects, sleep did not affect recognition, nor did sleep moderate the effects of arousal, valence and novelty. The findings suggest a robust inclination of children to specifically forget the pairing of words to negatively valenced cue words. In agreement with a recent meta-analysis, children seem to depend less on sleep for the consolidation of information than has been reported for adults, irrespective of the emotional valence, arousal and novelty of word-pairs. © 2017 European Sleep Research Society.

  7. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks and different feature extraction and recognition methods have been proposed to improve the performance of eye movement biometric systems. However, correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the benefit of this visual searching task for eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.

  8. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks and different feature extraction and recognition methods have been proposed to improve the performance of eye movement biometric systems. However, correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the benefit of this visual searching task for eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383
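    The equal error rate (EER) used for comparison above is the operating point at which the false accept rate equals the false reject rate. A minimal sketch over lists of genuine and impostor match scores (assuming the convention that higher scores indicate a stronger match; the score values below are made up):

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores and return
    the EER: the point where false accept rate (impostor >= t) and
    false reject rate (genuine < t) are closest, averaged."""
    best_gap, best_eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Made-up similarity scores: higher = stronger claimed match.
genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # same-person comparisons
impostor = [0.65, 0.5, 0.45, 0.4, 0.3]  # different-person comparisons
eer = equal_error_rate(genuine, impostor)
```

    Here one impostor score (0.65) outranks one genuine score (0.6), giving an EER of 0.2 (one error in five on each side); perfectly separated score distributions give an EER of 0.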

  9. Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.

    PubMed

    Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David

    2016-03-21

    Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol, designed to induce activity in V1 without modulation from visual awareness, to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Does N200 reflect semantic processing?--An ERP study on Chinese visual word recognition.

    PubMed

    Du, Yingchun; Zhang, Qin; Zhang, John X

    2014-01-01

    Recent event-related potential research has reported an N200 response, a negative deflection peaking around 200 ms following the visual presentation of two-character Chinese words. This N200 shows amplitude enhancement upon immediate repetition, and there is preliminary evidence that it reflects orthographic but not semantic processing. The present study tested whether this N200 is indeed unrelated to semantic processing with more sensitive measures, including the use of two tasks engaging semantic processing either implicitly or explicitly and the adoption of a within-trial priming paradigm. In Exp. 1, participants viewed repeated, semantically related and unrelated prime-target word pairs as they performed a lexical decision task judging whether or not each target was a real word. In Exp. 2, participants viewed high-related, low-related and unrelated word pairs as they performed a semantic task judging whether each word pair was related in meaning. In both tasks, semantic priming was found in both the behavioral data and the N400 ERP responses. Critically, while repetition priming elicited a clear and large enhancement of the N200 response, semantic priming showed no modulation of the same response. The results indicate that the N200 repetition enhancement effect cannot be explained by semantic priming and that this specific N200 response is unlikely to reflect semantic processing.

  11. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women benefited similarly from color in reducing spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  12. Infrared vehicle recognition using unsupervised feature learning based on K-feature

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Tan, Yihua; Xia, Haijiao; Tian, Jinwen

    2018-02-01

    Given the complexity of battlefield environments, it is difficult to establish a complete knowledge base for practical vehicle recognition algorithms. Infrared vehicle recognition therefore remains difficult and challenging, and plays an important role in remote sensing. In this paper we propose a new unsupervised feature learning method based on K-feature to recognize vehicles in infrared images. First, a saliency-based target detection algorithm is applied to the initial image. Then, unsupervised feature learning based on K-feature, in which a k-means clustering algorithm learns a visual dictionary from a large number of unlabeled samples, is used to suppress false alarms and improve accuracy. Finally, the vehicle recognition result is produced by post-processing. Extensive experiments demonstrate that the proposed method achieves satisfactory effectiveness and robustness for vehicle recognition in infrared images with complex backgrounds, and also improves reliability.
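    The dictionary-learning step described above (k-means over unlabeled samples, then encoding by distance to the learned atoms) can be sketched in plain NumPy. This is a simplified stand-in for the paper's K-feature pipeline, with synthetic two-cluster "patches" in place of real infrared data.

```python
import numpy as np

def kmeans(patches, k, iters=20):
    """Plain Lloyd's k-means: learn a 'visual dictionary' of k
    centroids from unlabeled patch vectors (n_patches, dim).
    Deterministic spread-out init; real code would use k-means++."""
    idx = np.linspace(0, len(patches) - 1, k).astype(int)
    centroids = patches[idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(patches[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = patches[labels == j].mean(axis=0)
    return centroids

def encode(patch, centroids):
    """K-feature-style encoding: distance of a patch to each atom."""
    return np.linalg.norm(centroids - patch, axis=1)

rng = np.random.default_rng(1)
# Two synthetic patch clusters standing in for background vs. vehicle.
patches = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
                     rng.normal(1.0, 0.1, (50, 8))])
dictionary = kmeans(patches, k=2)
```

    The learned atoms recover the two cluster centers, and `encode` maps any patch to a k-dimensional feature vector that a downstream classifier can use to separate vehicles from false alarms.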

  13. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

In the processes of visual perception and recognition, human eyes actively select essential information through successive fixations at the most informative points of the image. A behavioral program defining a scanpath over the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and the sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated invariant recognition only of simple objects such as letters, or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), anchored to the basic feature (edge) at the fixation point. This gives our model the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural-network subsystem of low-level vision, which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary-feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model recognizes complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.
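
The recall-shift-verify loop described above can be sketched in miniature as follows. This is a toy illustration only: the function and data names are hypothetical, and the FFR-based invariances are omitted.

```python
def recognize(motor_memory, sensory_memory, observe):
    """Replay a learned scanpath: execute each stored attention shift
    and verify that the observed features match the stored ones.
    `observe(position)` returns the feature set extracted at a fixation."""
    position = (0, 0)  # start at the initial fixation point
    for shift, expected in zip(motor_memory, sensory_memory):
        # execute the next shift of attention recalled from Motor Memory
        position = (position[0] + shift[0], position[1] + shift[1])
        # verify the expected feature set stored in Sensory Memory
        if observe(position) != expected:
            return False  # expected features not found: no match
    return True
```

A match succeeds only if every fixation along the scanpath yields the expected features.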

  14. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representations in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. An explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1) but not in nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representations. Implications for the role of visual attention in word recognition are also discussed.

  15. Pairing vegetables with a liked food and visually appealing presentation: promising strategies for increasing vegetable consumption among preschoolers.

    PubMed

    Correia, Danielle C S; O'Connell, Meghan; Irwin, Melinda L; Henderson, Kathryn E

    2014-02-01

Vegetable consumption among preschool children is below recommended levels. New evidence-based approaches to increase preschoolers' vegetable intake, particularly in the child care setting, are needed. This study tests the effectiveness of two community-based randomized interventions to increase vegetable consumption and willingness to try vegetables: (1) pairing a vegetable with a familiar, well-liked food and (2) enhancing the visual appeal of a vegetable. Fifty-seven preschoolers enrolled in a Child and Adult Care Food Program-participating child care center participated in the study; complete lunch and snack data were collected from 43 and 42 children, respectively. A within-subjects, randomized design was used, with order of condition counterbalanced. For lunch, steamed broccoli was served either on the side of or on top of cheese pizza. For a snack, raw cucumber was served either as semicircles with a chive and olive garnish or arranged in a visually appealing manner (in the shape of a caterpillar). Paired t-tests were used to determine differences in consumption of meal components, and McNemar's test was performed to compare willingness to taste. Neither visual appeal enhancement nor pairing with a liked food increased vegetable consumption. Pairing increased willingness to try the vegetable from 79% to 95% of children (p=0.07). Greater vegetable intake occurred at snack than at lunch. Further research should explore the strategy of pairing vegetables with liked foods. Greater consumption at snack underscores snack time as a critical opportunity for increasing preschool children's vegetable intake.
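
McNemar's test, used above to compare willingness to taste, reduces to an exact binomial test on the discordant pairs (children who changed in one direction versus the other). A minimal sketch follows; the example counts (1 vs. 7 discordant children) are hypothetical values chosen only to illustrate a p-value near the reported 0.07, not the study's data.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test p-value for paired binary outcomes.
    b, c: counts of discordant pairs (changed in each direction).
    Under H0, the b discordant flips among n = b + c follow Binomial(n, 0.5)."""
    n = b + c
    k = min(b, c)
    # two-sided exact p: double the smaller binomial tail probability
    p = sum(comb(n, i) for i in range(0, k + 1)) * 0.5 ** n * 2
    return min(p, 1.0)

# hypothetical discordant counts: 1 child changed toward refusing,
# 7 changed toward tasting
p_value = mcnemar_exact(1, 7)  # ~0.07
```

Equal discordant counts (e.g. `mcnemar_exact(5, 5)`) give p = 1.0, as expected for no directional change.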

  16. Named Entity Recognition in a Hungarian NL Based QA System

    NASA Astrophysics Data System (ADS)

    Tikk, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor

    In the WoW project our purpose is to create a complex search interface with the following features: search in the deep web content of contracted partners' databases; processing of Hungarian natural language (NL) questions and their transformation into SQL queries for database access; and image search supported by a visual thesaurus that describes the visual content of images in structured form (also in Hungarian). This paper focuses primarily on a particular problem of the question-processing task: entity recognition. Before going into details, we give a short overview of the project's aims.

  17. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  18. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia), and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of cases, zygosity determinations agreed with DNA-based results. There were 81 different classification scenarios, spanning 3 groups, 3 features, 3 scaling methods, and 3 threshold levels. Coincidence with 0.5 mm tolerance proved the most suitable feature for classification. Also, leaving out scaling improved results in most cases. Scaling was expected to equalize the magnitude of differences and thereby improve recognition performance. Still, better classification features, a more effective scaling method, or classification within different facial areas could further improve the results. In most cases, zygosity recognition was better for male pairs than for female pairs. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, the faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion of both zygosity groups.
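
The "coincidence within 0.5 mm tolerance" feature can be sketched as the share of point-wise surface distances between two facial meshes that fall within tolerance, thresholded for zygosity. The 0.6 decision threshold and the sample distances below are hypothetical, since the paper's actual values are not given here.

```python
def coincidence(distances_mm, tol=0.5):
    """Fraction of point-wise surface distances (mm) within tolerance."""
    return sum(1 for d in distances_mm if abs(d) <= tol) / len(distances_mm)

def classify_zygosity(distances_mm, threshold=0.6):
    """Label a twin pair MZ if their facial surfaces coincide closely enough.
    `threshold` is a hypothetical decision boundary for illustration."""
    return "MZ" if coincidence(distances_mm) >= threshold else "DZ"
```

As the abstract notes, this kind of single-feature classifier will mislabel the outlier pairs: very similar DZ faces sit above any reasonable threshold, and dissimilar MZ faces sit below it.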

  19. An unusual mode of DNA duplex association: Watson-Crick interaction of all-purine deoxyribonucleic acids.

    PubMed

    Battersby, Thomas R; Albalos, Maria; Friesenhahn, Michel J

    2007-05-01

    Nucleic acid duplexes associating through purine-purine base pairing have been constructed and characterized in a remarkable demonstration of nucleic acids with mixed sequence and a natural backbone in an alternative duplex structure. The antiparallel deoxyribose all-purine duplexes associate specifically through Watson-Crick pairing, violating the nucleobase size-complementarity pairing convention found in Nature. Sequence-specific recognition displayed by these structures makes the duplexes suitable, in principle, for information storage and replication fundamental to molecular evolution in all living organisms. All-purine duplexes can be formed through association of purines found in natural ribonucleosides. Key to the formation of these duplexes is the N(3)-H tautomer of isoguanine, preferred in the duplex, but not in aqueous solution. The duplexes have relevance to evolution of the modern genetic code and can be used for molecular recognition of natural nucleic acids.

  20. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. We previously developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing-based method for pointer segmentation. These methods improved the recall of pointer recognition but did little for precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluations performed on two datasets, with comparisons against other pointer segmentation methods, show significantly improved precision and the highest F1 score.

  1. Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome

    ERIC Educational Resources Information Center

    Shaw, Tracey A.; Porter, Melanie A.

    2013-01-01

    This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…

  2. Comparing the visual spans for faces and letters

    PubMed Central

    He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.

    2015-01-01

    The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858

  3. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of the more generic Image Understanding Problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  4. Neural network-based systems for handprint OCR applications.

    PubMed

    Ganis, M D; Wilson, C L; Blue, J L

    1998-01-01

    Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrence of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and of the NIST form-based handprint recognition system are also summarized.

  5. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  6. Sequence-dependent DNA deformability studied using molecular dynamics simulations.

    PubMed

    Fujii, Satoshi; Kono, Hidetoshi; Takenaka, Shigeori; Go, Nobuhiro; Sarai, Akinori

    2007-01-01

    Proteins recognize specific DNA sequences not only through direct contact between amino acids and bases but also indirectly, based on the sequence-dependent conformation and deformability of the DNA (indirect readout). We used molecular dynamics simulations to analyze the sequence-dependent DNA conformations of all 136 possible tetrameric sequences sandwiched between CGCG sequences. The deformability of dimeric steps obtained from the simulations is consistent with that derived from crystal structures. The simulation results further showed that the conformation and deformability of the tetramers can depend strongly on the flanking base pairs. The conformations of xATx tetramers are the most rigid and are not affected by the flanking base pairs, whereas xYRx tetramers show the greatest flexibility and change their conformations depending on the base pairs at both ends, suggesting that tetramers with the same central dimer can show different deformabilities. These results suggest that analysis of dimeric steps alone may overlook some conformational features of DNA, and they provide insight into the mechanism of indirect readout in protein-DNA recognition. Moreover, the sequence dependence of DNA conformation and deformability may be used to estimate the contribution of indirect readout to the specificity of protein-DNA recognition, as well as to nucleosome positioning and the large-scale behavior of nucleic acids.

  7. Memory and disgust: Effects of appearance-congruent and appearance-incongruent information on source memory for food.

    PubMed

    Mieth, Laura; Bell, Raoul; Buchner, Axel

    2016-01-01

    The present study was stimulated by previous findings showing that people preferentially remember person descriptions that violate appearance-based first impressions. Given that until now all studies used faces as stimuli, these findings can be explained by referring to a content-specific module for social information processing that facilitates social orientation within groups via stereotyping and counter-stereotyping. The present study tests whether the same results can be obtained with fitness-relevant stimuli from another domain--pictures of disgusting-looking or tasty-looking food, paired with tasty and disgusting descriptions. A multinomial model was used to disentangle item memory, guessing and source memory. There was an old-new recognition advantage for disgusting-looking food. People had a strong tendency towards guessing that disgusting-looking food had been previously associated with a disgusting description. Source memory was enhanced for descriptions that disconfirmed these negative, appearance-based impressions. These findings parallel the results from the social domain. Heuristic processing of stimuli based on visual appearance may be complemented by intensified processing of incongruent information that invalidates these first impressions.

  8. Neural Correlates of Intersensory Processing in Five-Month-Old Infants

    PubMed Central

    Reynolds, Greg D.; Bahrick, Lorraine E.; Lickliter, Robert; Guy, Maggie W.

    2014-01-01

    Two experiments assessing event-related potentials in 5-month-old infants were conducted to examine neural correlates of attentional salience and efficiency of processing of a visual event (woman speaking) paired with redundant (synchronous) speech, nonredundant (asynchronous) speech, or no speech. In Experiment 1, the Nc component associated with attentional salience was greater in amplitude following synchronous audiovisual as compared with asynchronous audiovisual and unimodal visual presentations. A block design was utilized in Experiment 2 to examine efficiency of processing of a visual event. Only infants exposed to synchronous audiovisual speech demonstrated a significant reduction in amplitude of the late slow wave associated with successful stimulus processing and recognition memory from early to late blocks of trials. These findings indicate that events that provide intersensory redundancy are associated with enhanced neural responsiveness indicative of greater attentional salience and more efficient stimulus processing as compared with the same events when they provide no intersensory redundancy in 5-month-old infants. PMID:23423948

  9. The Inversion Effect for Chinese Characters is Modulated by Radical Organization.

    PubMed

    Luo, Canhuang; Chen, Wei; Zhang, Ye

    2017-06-01

    In studies of visual object recognition, strong inversion effects accompany the acquisition of expertise and imply the involvement of configural processing. Chinese literacy results in sensitivity to the orthography of Chinese characters. While there is some evidence that this orthographic sensitivity produces an inversion effect, and thus involves configural processing, that processing might depend on exact orthographic properties. Chinese character recognition is believed to involve a hierarchical process with at least two lower levels of representation: strokes and radicals. Radicals are grouped into characters according to certain types of structure, i.e. left-right structure, top-bottom structure, or simple characters consisting of a single radical. These types of radical structure vary in both familiarity and hierarchical level (compound versus simple characters). In this study, we investigate whether the hierarchical level or familiarity of radical structure affects the magnitude of the inversion effect. Participants performed a matching task on pairs of either upright or inverted characters of all structure types. Inversion effects were measured in both reaction time and response sensitivity. While an inversion effect was observed in all 3 conditions, its magnitude varied with radical structure, being significantly larger for the most familiar type of structure: characters consisting of 2 radicals organized from left to right. These findings indicate that character recognition involves extraction of configural structure as well as radical processing, which play different roles in the processing of compound and simple characters.

  10. Self-supervised online metric learning with low rank constraint for scene categorization.

    PubMed

    Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo

    2013-08-01

    Conventional visual recognition systems usually train an image classifier in batch mode, with all training data provided in advance. However, in many practical applications only a small number of training samples is available at the beginning, and many more arrive sequentially during online recognition. Because the characteristics of the image data can change over time, it is important for the classifier to adapt to new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By incorporating a low-rank constraint, our online metric learning model not only provides competitive performance compared with state-of-the-art methods but also guarantees convergence. A bi-linear graph is defined to model pair-wise similarity; an unseen sample is labeled via graph-based label propagation, and the model self-updates using the more confident new samples. With the ability of online learning, our method can handle large-scale streaming video data with incremental self-updating. We apply our model to online scene categorization; experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
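
A minimal sketch of the margin-based online metric update described above, simplified to a diagonal Mahalanobis metric: clamping the diagonal at zero keeps the metric positive semidefinite, standing in for the paper's low-rank constraint. The full method's bi-linear graph and label propagation are omitted; parameter names and values are illustrative.

```python
def mahalanobis_sq(w, x, y):
    """Squared distance under a diagonal metric w (all w[i] >= 0)."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def online_update(w, x, y, same_class, margin=1.0, lr=0.1):
    """One online step on a labeled pair: pull same-class pairs inside
    the margin, push different-class pairs outside it; clamp at zero
    so the metric stays positive semidefinite."""
    d2 = mahalanobis_sq(w, x, y)
    sign = 1.0 if same_class else -1.0
    if sign * (d2 - margin) > 0:  # margin violated: take a gradient step
        grad = [sign * (xi - yi) ** 2 for xi, yi in zip(x, y)]
        w = [max(0.0, wi - lr * gi) for wi, gi in zip(w, grad)]
    return w
```

Each incoming labeled pair updates the metric only when it violates the margin, which is what lets the classifier adapt incrementally as the data distribution drifts.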

  11. Continued effects of context reinstatement in recognition.

    PubMed

    Hanczakowski, Maciej; Zawadzka, Katarzyna; Macken, Bill

    2015-07-01

    The context reinstatement effect refers to the enhanced memory performance found when the context information paired with a target item at study is re-presented at test. Here we investigated the consequences of the way that context information is processed in such a setting that gives rise to its beneficial effect on item recognition memory. Specifically, we assessed whether reinstating context in a recognition test facilitates subsequent memory for this context, beyond the facilitation conferred by presentation of the same context with a different study item. Reinstating the study context at test led to better accuracy in two-alternative forced choice recognition for target faces than did re-pairing those faces with another context encountered during the study phase. The advantage for reinstated over re-paired conditions occurred for both within-subjects (Exp. 1) and between-subjects (Exp. 2) manipulations. Critically, in a subsequent recognition test for the contexts themselves, contexts that had previously served in the reinstated condition were recognized better than contexts that had previously served in the re-paired context condition. This constitutes the first demonstration of continuous effects of context reinstatement on memory for context.

  12. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu

    2018-04-01

    Objective. Retinal prosthesis devices have shown great value in restoring some sight to individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to seek to expand the perceptible visual field of patients potentially implanted with a high-density retinal prosthesis, while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method that introduces salient object detection based on color and intensity-difference contrast, aiming to remap the important information of a scene into a small visual field while preserving its original scale as much as possible. This may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting object number and recognizing objects, were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method preserves more key features and yields significantly higher recognition accuracy than the other three image retargeting methods under conditions of small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.

  13. Associated impairment of the categories of conspecifics and biological entities: cognitive and neuroanatomical aspects of a new case.

    PubMed

    Capitani, Erminio; Chieppa, Francesca; Laiacona, Marcella

    2010-05-01

    Case A.C.A. presented an associated impairment of visual recognition and semantic knowledge for celebrities and biological objects. This case was relevant for (a) the neuroanatomical correlations, and (b) the relationship between visual recognition and semantics within the biological domain and the conspecifics domain. A.C.A. was not affected by anterior temporal damage. Her bilateral vascular lesions were localized on the medial and inferior temporal gyrus on the right and on the intermediate fusiform gyrus on the left, without concomitant lesions of the parahippocampal gyrus or posterior fusiform. Data analysis was based on a novel methodology developed to estimate the rate of stored items in the visual structural description system (SDS) or in the face recognition unit. For each biological object, no particular correlation was found between the visual information accessed through the semantic system and that tapped by the picture reality judgement. Findings are discussed with reference to whether a putative resource commonality is likely between biological objects and conspecifics, and whether or not either category may depend on an exclusive neural substrate.

  14. Fast cat-eye effect target recognition based on saliency extraction

    NASA Astrophysics Data System (ADS)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

    Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective-attention property that helps it find salient targets in complex, unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF), which combines traditional cat-eye target recognition with the selective character of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method outperforms other methods in accuracy, robustness, and speed.
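
Saliency extraction of this "pop-out" kind can be sketched as a simple center-surround contrast map. This is a toy stand-in for the paper's multi-channel saliency processing, whose actual channels and fusion rule are not specified in the abstract.

```python
def saliency_map(img):
    """Center-surround contrast: each pixel's saliency is its absolute
    difference from the mean of its 3x3 neighbourhood (clipped at borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nb = [img[a][b]
                  for a in range(max(0, i - 1), min(h, i + 2))
                  for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = abs(img[i][j] - sum(nb) / len(nb))
    return out
```

A uniform background yields zero saliency everywhere, while a bright cat-eye return against a dark surround produces a strong local peak, which is exactly what lets a detector suppress background clutter.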

  15. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through the visual system, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper, aiming at automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention, and representation of human action. In our model, a family of three-dimensional spatiotemporal correlative Gabor filters is used to model the dynamic properties of the classical receptive fields of V1 simple cells tuned to different speeds and orientations, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neural networks in V1, we propose a surround-suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent human action, we consider a characteristic neural code: a mean motion map based on analysis of the spike trains generated by spiking neurons. Experimental evaluation on publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
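
A spatiotemporal Gabor filter of the kind described can be sketched as a sinusoid drifting at a preferred speed along a preferred orientation, under a Gaussian envelope. The parameter names and default values below are illustrative assumptions, not the paper's actual filter bank.

```python
import math

def gabor3d(x, y, t, theta, speed, sigma=2.0, lam=4.0):
    """Value at (x, y, t) of a spatiotemporal Gabor: a grating of
    wavelength `lam`, oriented at `theta`, drifting at `speed`,
    under an isotropic Gaussian envelope of width `sigma`."""
    # project position onto the grating's orientation
    xr = x * math.cos(theta) + y * math.sin(theta)
    # the drifting sinusoid: peaks move at `speed` along xr as t advances
    phase = 2 * math.pi * (xr - speed * t) / lam
    envelope = math.exp(-(x * x + y * y + t * t) / (2 * sigma ** 2))
    return envelope * math.cos(phase)
```

Convolving a video volume with a bank of such filters, one per (theta, speed) pair, gives the speed- and orientation-tuned responses the model feeds into surround suppression.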

  16. Supramolecular latching system based on ultrastable synthetic binding pairs as versatile tools for protein imaging.

    PubMed

    Kim, Kyung Lock; Sung, Gihyun; Sim, Jaehwan; Murray, James; Li, Meng; Lee, Ara; Shrinidhi, Annadka; Park, Kyeng Min; Kim, Kimoon

    2018-04-27

    Here we report ultrastable synthetic binding pairs between cucurbit[7]uril (CB[7]) and adamantyl- (AdA) or ferrocenyl-ammonium (FcA) as a supramolecular latching system for protein imaging, overcoming the limitations of protein-based binding pairs. Cyanine 3-conjugated CB[7] (Cy3-CB[7]) can visualize AdA- or FcA-labeled proteins to provide clear fluorescence images for accurate and precise analysis of proteins. Furthermore, controllability of the system is demonstrated by treating with a stronger competitor guest. At low temperature, this allows us to selectively detach Cy3-CB[7] from guest-labeled proteins on the cell surface, while leaving Cy3-CB[7] latched to the cytosolic proteins for spatially conditional visualization of target proteins. This work represents a non-protein-based bioimaging tool which has inherent advantages over the widely used protein-based techniques, thereby demonstrating the great potential of this synthetic system.

  17. Mutation of the PAX6 gene in a sporadic patient with atypical aniridia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, D.; Li, Y.; Traboulsi, E.I.

    1994-09-01

    A 28-year-old man presented with poor vision since childhood and gradual further decline of several years' duration. His visual acuity measured 20/200 OD with -11.50 + 0.50 x 150 and 20/100 OS with -12.25 + 0.25 x 35. He had a fine nystagmus. His visual fields were full. There was a circumferential pannus with areas of corneal stromal opacification. The iris was hypoplastic with atypical colobomatous defects. The lenses had scattered cortical opacities. The intraocular pressures were normal. The optic nerves had cup-to-disk ratios of 0.6 OU. The family history was negative for similar defects. A diagnosis of aniridia was made and blood was drawn for analysis of the PAX6 gene. PCR amplification of exon 5 showed heterozygous fragments, with one allele being larger than normal. Direct DNA sequencing of the individual heterozygous allele showed a 41 base pair insertion at nucleotide 483 in exon 5 of the paired domain. This frameshift mutation changed codon 71 to a stop codon. The diagnosis of aniridia was confirmed in this atypical patient, who will need to be monitored for his high risk of glaucoma. The risk of developing Wilms' tumor in patients with mutations within the aniridia gene is presumably negligible since the neighboring Wilms' tumor gene is unaffected. The identification of intragenic mutations of the PAX6 gene in patients with sporadic aniridia modifies the management of such patients through recognition of the increased risk of glaucoma and by reducing the necessity for frequent monitoring for the presence of Wilms' tumor.

  18. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  19. An Electrophysiological Signature of Summed Similarity in Visual Working Memory

    ERIC Educational Resources Information Center

    van Vugt, Marieke K.; Sekuler, Robert; Wilson, Hugh R.; Kahana, Michael J.

    2013-01-01

    Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth…

  20. Complementary Hemispheric Asymmetries in Object Naming and Recognition: A Voxel-Based Correlational Study

    ERIC Educational Resources Information Center

    Acres, K.; Taylor, K. I.; Moss, H. E.; Stamatakis, E. A.; Tyler, L. K.

    2009-01-01

    Cognitive neuroscientific research proposes complementary hemispheric asymmetries in naming and recognising visual objects, with a left temporal lobe advantage for object naming and a right temporal lobe advantage for object recognition. Specifically, it has been proposed that the left inferior temporal lobe plays a mediational role linking…

  1. Effects of cholinergic deafferentation of the rhinal cortex on visual recognition memory in monkeys.

    PubMed

    Turchi, Janita; Saunders, Richard C; Mishkin, Mortimer

    2005-02-08

    Excitotoxic lesion studies have confirmed that the rhinal cortex is essential for visual recognition ability in monkeys. To evaluate the mnemonic role of cholinergic inputs to this cortical region, we compared the visual recognition performance of monkeys given rhinal cortex infusions of a selective cholinergic immunotoxin, ME20.4-SAP, with the performance of monkeys given control infusions into this same tissue. The immunotoxin, which leads to selective cholinergic deafferentation of the infused cortex, yielded recognition deficits of the same magnitude as those produced by excitotoxic lesions of this region, providing the most direct demonstration to date that cholinergic activation of the rhinal cortex is essential for storing the representations of new visual stimuli and thereby enabling their later recognition.

  2. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  3. The cognitive processing of film and musical soundtracks.

    PubMed

    Boltz, Marilyn G

    2004-10-01

    Previous research has demonstrated that musical soundtracks can influence the interpretation, emotional impact, and remembering of film information. The intent here was to examine how music is encoded into the cognitive system and subsequently represented relative to its accompanying visual action. In Experiment 1, participants viewed a set of music/film clips that were either congruent or incongruent in their emotional affects. Selective attending was also systematically manipulated by instructing viewers to attend to and remember the music, film, or both in tandem. The results from tune recognition, film recall, and paired discrimination tasks collectively revealed that mood-congruent pairs lead to a joint encoding of music/film information as well as an integrated memory code. Incongruent pairs, on the other hand, result in an independent encoding in which a given dimension, music or film, is only remembered well if it was selectively attended to at the time of encoding. Experiment 2 extended these findings by showing that tunes from mood-congruent pairs are better recognized when cued by their original scenes, while those from incongruent pairs are better remembered in the absence of scene information. These findings both support and extend the "Congruence Associationist Model" (A. J. Cohen, 2001), which addresses those cognitive mechanisms involved in the processing of music/film information.

  4. Classification of pseudo pairs between nucleotide bases and amino acids by analysis of nucleotide-protein complexes.

    PubMed

    Kondo, Jiro; Westhof, Eric

    2011-10-01

    Nucleotide bases are recognized by amino acid residues in a variety of DNA/RNA binding and nucleotide binding proteins. In this study, a total of 446 crystal structures of nucleotide-protein complexes are analyzed manually and pseudo pairs together with single and bifurcated hydrogen bonds observed between bases and amino acids are classified and annotated. Only 5 of the 20 usual amino acid residues, Asn, Gln, Asp, Glu and Arg, are able to orient in a coplanar fashion in order to form pseudo pairs with nucleotide bases through two hydrogen bonds. The peptide backbone can also form pseudo pairs with nucleotide bases and presents a strong bias for binding to the adenine base. The Watson-Crick side of the nucleotide bases is the major interaction edge participating in such pseudo pairs. Pseudo pairs between the Watson-Crick edge of guanine and Asp are frequently observed. The Hoogsteen edge of the purine bases is a good discriminatory element in recognition of nucleotide bases by protein side chains through the pseudo pairing: the Hoogsteen edge of adenine is recognized by various amino acids while the Hoogsteen edge of guanine is only recognized by Arg. The sugar edge is rarely recognized by either the side-chain or peptide backbone of amino acid residues.

  6. Reduced adaptability, but no fundamental disruption, of norm-based face coding following early visual deprivation from congenital cataracts.

    PubMed

    Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne

    2017-05-01

    Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.

  7. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
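
    The "simple learned weighted sum" linking hypothesis is, computationally, a linear readout from population firing rates to behavior. The sketch below fits such a readout by stochastic gradient descent on synthetic data (all rates and labels are invented for illustration; the study used recorded IT responses and measured human performance):

```python
def learn_weighted_sum(rates, labels, lr=0.05, epochs=2000):
    """Least-squares linear readout trained by SGD: returns weights w and
    bias b such that sum(w * rates) + b approximates the behavioral label."""
    w = [0.0] * len(rates[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rates, labels):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

# Toy example: two "neurons" whose firing rates linearly determine a
# performance score (labels generated as 0.5*r1 - 0.2*r2 + 0.1).
rates = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
labels = [0.6, -0.1, 0.4, 0.9]
w, b = learn_weighted_sum(rates, labels)
```

    In the study's setting, `rates` would be mean firing rates of thousands of IT neurons per image and the labels per-task human performance; the point of the hypothesis is that nothing more elaborate than this weighted sum is needed.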

  8. Fifty years of progress in speech and speaker recognition

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    2004-10-01

    Speech and speaker recognition technology has made very significant progress in the past 50 years. The progress can be summarized by the following changes: (1) from template matching to corpus-based statistical modeling, e.g., HMM and n-grams, (2) from filter bank/spectral resonance to cepstral features (cepstrum + Δcepstrum + ΔΔcepstrum), (3) from heuristic time-normalization to DTW/DP matching, (4) from "distance"-based to likelihood-based methods, (5) from maximum likelihood to discriminative approaches, e.g., MCE/GPD and MMI, (6) from isolated word to continuous speech recognition, (7) from small vocabulary to large vocabulary recognition, (8) from context-independent units to context-dependent units for recognition, (9) from clean speech to noisy/telephone speech recognition, (10) from single speaker to speaker-independent/adaptive recognition, (11) from monologue to dialogue/conversation recognition, (12) from read speech to spontaneous speech recognition, (13) from recognition to understanding, (14) from single-modality (audio signal only) to multimodal (audio/visual) speech recognition, (15) from hardware recognizers to software recognizers, and (16) from no commercial applications to many practical commercial applications. Most of these advances have taken place in both the fields of speech recognition and speaker recognition. The majority of technological changes have been directed toward increasing the robustness of recognition, including many other important techniques not noted above.

  9. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    PubMed Central

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  10. Stages of functional processing and the bihemispheric recognition of Japanese Kana script.

    PubMed

    Yoshizaki, K

    2000-04-01

    Two experiments were carried out to examine the effects of functional steps on the benefits of interhemispheric integration. The purpose of Experiment 1 was to investigate the validity of the Banich (1995a) model, in which the benefits of interhemispheric processing increase as the task involves more functional steps. Sixteen right-handed subjects were given two types of Hiragana-Katakana script matching tasks. One was the name identity (NI) task, and the other was the vowel matching (VM) task, which involved more functional steps than the NI task. The VM task required subjects to decide whether or not a pair of Katakana-Hiragana scripts had a common vowel. In both tasks, a pair of Kana scripts (Katakana-Hiragana scripts) was tachistoscopically presented in a unilateral visual field or in the bilateral visual fields, where each letter was presented in a different visual field. A bilateral visual fields advantage (BFA) was found in both tasks, and its size did not differ between the tasks; these findings did not support the Banich model. The purpose of Experiment 2 was to examine the effects of an imbalanced processing load between the hemispheres on the benefits of interhemispheric integration. To manipulate the balance of processing load across the hemispheres, a revised vowel matching (r-VM) task was developed by amending the VM task. The r-VM task was the same as the VM task in Experiment 1, except that a script that has only a vowel sound was presented as the counterpart of the pair of Kana scripts. Twenty-four right-handed subjects were given the r-VM and NI tasks. The results showed that although a BFA appeared in the NI task, it did not in the r-VM task. These results suggest that the balance of processing load between the hemispheres influences bilateral hemispheric processing.

  11. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representations for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentations are clearly improved. We also quantitatively obtain an intersection over union (IOU) accuracy improvement of about 2%-3% on the PASCAL VOC 2011 and 2012 test sets.

  12. Target-context unitization effect on the familiarity-related FN400: a face recognition exclusion task.

    PubMed

    Guillaume, Fabrice; Etienne, Yann

    2015-03-01

    Using two exclusion tasks, the present study examined how the ERP correlates of face recognition are affected by the nature of the information to be retrieved. Intrinsic (facial expression) and extrinsic (background scene) visual information were paired with face identity and constituted the exclusion criterion at test time. Although perceptual information had to be taken into account in both situations, the FN400 old-new effect was observed only for old target faces on the expression-exclusion task, whereas it was found for both old target and old non-target faces in the background-exclusion situation. These results reveal that the FN400, which is generally interpreted as a correlate of familiarity, was modulated by the retrieval of intra-item and intrinsic face information, but not by the retrieval of extrinsic information. The observed effects on the FN400 depended on the nature of the information to be retrieved and its relationship (unitization) to the recognition target. On the other hand, the parietal old-new effect (generally described as an ERP correlate of recollection) reflected the retrieval of both types of contextual features equivalently. The current findings are discussed in relation to recent controversies about the nature of the recognition processes reflected by the ERP correlates of face recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Towards discrete wavelet transform-based human activity recognition

    NASA Astrophysics Data System (ADS)

    Khare, Manish; Jeon, Moongu

    2017-06-01

    Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as a feature of human objects to obtain advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods.
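
    The multiresolution feature extraction can be illustrated with a one-level-at-a-time Haar transform, the simplest DWT. This is a generic sketch of wavelet-based descriptors, not the paper's exact feature computation:

```python
def haar_dwt_1d(signal):
    """One level of the 1-D Haar discrete wavelet transform: returns
    (approximation, detail) coefficients from orthonormal pair averages
    and differences."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2 ** 0.5)
        detail.append((a - b) / 2 ** 0.5)
    return approx, detail

def dwt_features(signal, levels=2):
    """Multiresolution feature vector: detail coefficients from each
    decomposition level plus the final approximation."""
    feats = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_dwt_1d(current)
        feats.extend(detail)
    feats.extend(current)
    return feats
```

    Because the Haar transform is orthonormal, the feature vector preserves the signal's energy, and the detail coefficients at successive levels capture variation at coarser and coarser temporal scales, which is the "multiresolution approach" the abstract refers to.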

  14. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

    Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so a general procedure for computing these parameters is of considerable interest. By combining the CCD-related parameters of an astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference catalogue for the celestial region corresponding to the image; star pattern recognition then matches the astronomical image against this reference catalogue and yields a table relating the CCD plane coordinates of a number of stars to their celestial coordinates; finally, depending on the chosen projection of the sphere onto the plane, WCS builds the transfer functions between these two coordinate systems, and the astronomical position of each image pixel can be determined from the table. FITS is the mainstream data format for transmitting and analyzing scientific image data, but FITS images can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular astronomy education, so a general visualization method is valuable. The FITS file is first converted to a PNG or JPEG image; the coordinate parameters in the FITS header are converted to metadata in AVM (Astronomy Visualization Metadata) form, and the metadata is then embedded in the PNG or JPEG header. This method meets amateur astronomers' needs for viewing and analyzing astronomical images outside specialized astronomical software. The overall design flow is implemented as a Java program and tested with SExtractor, WorldWide Telescope, an ordinary picture viewer, and other software.
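
    The central WCS computation, mapping a pixel position to celestial coordinates, can be sketched for the common gnomonic (TAN) projection using only the linear header terms (CRPIX, CRVAL, and the CD matrix). Distortion terms that real astrometric solutions often carry are omitted in this simplified sketch:

```python
import math

def pixel_to_sky(px, py, crpix, crval, cd):
    """Convert a pixel position to (RA, Dec) in degrees using the linear
    FITS-WCS terms (CRPIX reference pixel, CRVAL reference sky position,
    CD rotation/scale matrix) and an inverse gnomonic (TAN) projection."""
    dx, dy = px - crpix[0], py - crpix[1]
    # Intermediate world coordinates in degrees, converted to radians.
    xi = math.radians(cd[0][0] * dx + cd[0][1] * dy)
    eta = math.radians(cd[1][0] * dx + cd[1][1] * dy)
    ra0, dec0 = math.radians(crval[0]), math.radians(crval[1])
    # Inverse gnomonic projection about the reference point.
    d = math.cos(dec0) - eta * math.sin(dec0)
    ra = ra0 + math.atan2(xi, d)
    dec = math.atan2(math.sin(dec0) + eta * math.cos(dec0),
                     math.hypot(xi, d))
    return math.degrees(ra), math.degrees(dec)
```

    The pixel at CRPIX maps exactly to CRVAL, and nearby pixels scale through the CD matrix; per-pixel coordinates of this kind are what would be serialized as AVM tags in the exported PNG or JPEG.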

  15. The Nature of Expertise in Fingerprint Matching: Experts Can Do a Lot with a Little

    PubMed Central

    Thompson, Matthew B.; Tangen, Jason M.

    2014-01-01

    Expert decision making often seems impressive, even miraculous. People with genuine expertise in a particular domain can perform quickly and accurately, and with little information. In the series of experiments presented here, we manipulate the amount of “information” available to a group of experts whose job it is to identify the source of crime scene fingerprints. In Experiment 1, we reduced the amount of information available to experts by inverting fingerprint pairs and adding visual noise. There was no evidence for an inversion effect—experts were just as accurate for inverted prints as they were for upright prints—but expert performance with artificially noisy prints was impressive. In Experiment 2, we separated matching and nonmatching print pairs in time. Experts were conservative, but they were still able to discriminate pairs of fingerprints that were separated by five seconds, even though the task was quite different from their everyday experience. In Experiment 3, we separated the print pairs further in time to test the long-term memory of experts compared to novices. Long-term recognition memory for experts and novices was the same, with both performing around chance. In Experiment 4, we presented pairs of fingerprints quickly to experts and novices in a matching task. Experts were more accurate than novices, particularly for similar nonmatching pairs, and experts were generally more accurate when they had more time. It is clear that experts can match prints accurately when there is reduced visual information, reduced opportunity for direct comparison, and reduced time to engage in deliberate reasoning. These findings suggest that non-analytic processing accounts for a substantial portion of the variance in expert fingerprint matching accuracy. Our conclusion is at odds with general wisdom in fingerprint identification practice and formal training, and at odds with the claims and explanations that are offered in court during expert testimony. 
PMID:25517509

  16. The Anatomy of Non-conscious Recognition Memory.

    PubMed

    Rosenthal, Clive R; Soto, David

    2016-11-01

    Cortical regions as early as primary visual cortex have been implicated in recognition memory. Here, we outline the challenges that this presents for neurobiological accounts of recognition memory. We conclude that understanding the role of early visual cortex (EVC) in this process will require the use of protocols that mask stimuli from visual awareness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  18. Association of auditory-verbal and visual hallucinations with impaired and improved recognition of colored pictures.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana

    2015-09-01

    A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.

  19. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research using artificial noun learning tasks shows that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and that it is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  20. Switching on fluorescence for selective visual recognition of naringenin and morin with a metal-organic coordination polymer of Zn(bix) [bix = 1,4-bis(imidazol-1-ylmethyl)benzene]

    NASA Astrophysics Data System (ADS)

    Zhao, Xi Juan; Wang, Hui Juan; Liang, Li Jiao; Li, Yuan Fang

    2013-02-01

    Flavonoids such as naringenin and morin are ubiquitous in a wide range of plant-derived foods, and have diverse effects on plants and even on human health. Here, we establish a selective visual method for the recognition of naringenin and morin based on the "switched on" fluorescence induced by a metal-organic coordination polymer of Zn(bix) [bix = 1,4-bis(imidazol-1-ylmethyl)benzene]. Owing to the coordination interaction of naringenin and morin with Zn(II) from the polymeric structure of Zn(bix), the free conformational rotation of naringenin and morin is restricted, leading to relatively rigid structures; as a consequence, the fluorescence is switched on. Luteolin and quercetin, which have structures very similar to those of naringenin and morin, show no such fluorescence enhancement, most likely owing to the 3'-hydroxy substitution in the B ring. Under 365 nm UV lamp light, we can visually recognize and discriminate naringenin and morin from each other, as well as from luteolin and quercetin, based on the colors of their emission. With this recognition system, naringenin and morin were detected in human urine with satisfactory results.

  1. Visual processing of moving and static self body-parts.

    PubMed

    Frassinetti, Francesca; Pavani, Francesco; Zamagni, Elisa; Fusaroli, Giulia; Vescovi, Massimo; Benassi, Mariagrazia; Avanzi, Stefano; Farnè, Alessandro

    2009-07-01

    Humans' ability to recognize static images of self body-parts can be lost following a lesion of the right hemisphere [Frassinetti, F., Maini, M., Romualdi, S., Galante, E., & Avanzi, S. (2008). Is it mine? Hemispheric asymmetries in corporeal self-recognition. Journal of Cognitive Neuroscience, 20, 1507-1516]. Here we investigated whether the visual information provided by the movement of self body-parts may be separately processed by right brain-damaged (RBD) patients and constitute a valuable cue to reduce their deficit in self body-parts processing. To pursue these aims, neurologically healthy subjects and RBD patients performed a matching task on pairs of successively presented visual stimuli, in two conditions. In the dynamic condition, participants were shown movies of moving body-parts (hand, foot, arm and leg); in the static condition, participants were shown still images of the same body-parts. In each condition, on half of the trials at least one stimulus in the pair was from the participant's own body ('Self' condition), whereas on the remaining half of the trials both stimuli were from another person ('Other' condition). Results showed that in healthy participants the self-advantage was present when processing both static and dynamic body-parts, but it was more pronounced in the latter condition. In RBD patients, however, the self-advantage was absent in the static, but present in the dynamic, body-parts condition. These findings suggest that visual information from self body-parts in motion may be processed independently in patients with impaired static self-processing, thus pointing to a modular organization of the mechanisms responsible for the self/other distinction.

  2. The effect of mood-context on visual recognition and recall memory.

    PubMed

    Robinson, Sarita J; Rollings, Lucy J L

    2011-01-01

    Although it is widely known that memory is enhanced when encoding and retrieval occur in the same state, the impact of elevated stress/arousal is less well understood. This study explores the effects of mood-dependent memory on the visual recognition and recall of material memorized either in a neutral mood or under higher stress/arousal levels. Participants' (N = 60) recognition and recall were assessed while they experienced either the same or a mismatched mood at retrieval. The results suggested that both visual recognition and recall memory were higher when participants experienced the same mood at encoding and retrieval compared with those who experienced a mismatch in mood context between encoding and retrieval. These findings offer support for a mood dependency effect on both the recognition and recall of visual information.

  3. A new method for text detection and recognition in indoor scene for assisting blind people

    NASA Astrophysics Data System (ADS)

    Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid

    2017-03-01

    Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provides a description of the surrounding environment, so that the blind person can readily recognize the scene. In this work, we aim to introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text, using connected components; text recognition is then performed by means of image correlation. This component of an assistive system for blind people should be simple, so that users are able to obtain the most informative feedback within the shortest time.

  4. View-Based Models of 3D Object Recognition and Class-Specific Invariance

    DTIC Science & Technology

    1994-04-01

    View-based models may underlie recognition of geon-like components (see Edelman, 1991, and Biederman, 1987). The report defines a weighted, view-invariant distance ||x - t_a||^2_W = (x - t_a)^T W^T W (x - t_a) (Eq. 3). References: I. Biederman, "Recognition-by-components: A theory of human image understanding," Psychological Review, 94:115-147, 1987; B. Olshausen, C. Anderson, and D. Van Essen, "A neural model of visual attention and invariant pattern recognition."

  5. Cross spectral, active and passive approach to face recognition for improved performance

    NASA Astrophysics Data System (ADS)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.

    2017-08-01

    Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity, based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on the processing of visual information, seems to be imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.

  6. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    PubMed

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.

  7. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation

    PubMed Central

    Sountsov, Pavel; Santucci, David M.; Lisman, John E.

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522
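
    The transform is described above only in words. As an illustrative analogue (not the authors' exact algorithm), the classic Fourier-Mellin construction achieves the same three invariances: a spectrum magnitude discards translation, a log-polar remap turns rotation and uniform scaling into translations, and a further magnitude discards those in turn. A minimal NumPy sketch, where the grid sizes and nearest-neighbour sampling are assumptions made for illustration:

```python
import numpy as np

def fft_mag(img):
    # The magnitude of the 2-D Fourier spectrum is invariant to
    # (circular) translation of the input image.
    return np.abs(np.fft.fftshift(np.fft.fft2(img)))

def log_polar(img, n_r=64, n_theta=64):
    # Nearest-neighbour resampling onto a log-polar grid: rotation and
    # uniform scaling about the centre become translations along the
    # theta and log-r axes.
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    radii = np.exp(np.linspace(0.0, np.log(np.hypot(cy, cx)), n_r))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, h - 1)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, w - 1)
    return img[ys, xs]

def invariant_signature(img):
    # Spectrum magnitude -> logarithmic remap, applied twice, with a
    # final magnitude: the result is unchanged by shift and, up to
    # resampling error, by scale and rotation.
    once = fft_mag(log_polar(fft_mag(img)))
    return fft_mag(log_polar(once))
```

    For example, `invariant_signature(img)` and `invariant_signature(np.roll(img, 7, axis=1))` produce (numerically) identical outputs, since the first spectrum magnitude already removes the shift.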

  8. How Chinese Semantics Capability Improves Interpretation in Visual Communication

    ERIC Educational Resources Information Center

    Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung

    2017-01-01

    A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved better results in semantic recognition and image interpretation tasks than students…

  9. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as made by a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  10. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face were recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  11. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performances over both approaches. This solution has beaten humans at recognizing general sketches.
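
    The Gabor-related low-level stage of such a hierarchical representation can be illustrated with a tiny filter-bank sketch. This is not the authors' implementation; the kernel size, wavelength, and the pooling into per-orientation mean rectified responses are all assumptions made for illustration:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    # Real (cosine) part of a Gabor filter: a sinusoid under a Gaussian
    # envelope, selective for structure oriented along `theta`.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def correlate_valid(img, kern):
    # Plain sliding-window cross-correlation ('valid' region only).
    kh, kw = kern.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def gabor_features(img, n_orient=8):
    # Low-level representation: mean rectified filter response at each
    # of n_orient evenly spaced orientations.
    feats = [np.mean(np.abs(correlate_valid(img, gabor_kernel(theta=k * np.pi / n_orient))))
             for k in range(n_orient)]
    return np.array(feats)
```

    A stroke-like stripe pattern produces its strongest pooled response at the matching orientation, which is the property the mid-level sparse encoder would then build on.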

  12. Structure, recognition and adaptive binding in RNA aptamer complexes.

    PubMed

    Patel, D J; Suri, A K; Jiang, F; Jiang, L; Fan, P; Kumar, R A; Nonin, S

    1997-10-10

    Novel features of RNA structure, recognition and discrimination have been recently elucidated through the solution structural characterization of RNA aptamers that bind cofactors, aminoglycoside antibiotics, amino acids and peptides with high affinity and specificity. This review presents the solution structures of RNA aptamer complexes with adenosine monophosphate, flavin mononucleotide, arginine/citrulline and tobramycin together with an example of hydrogen exchange measurements of the base-pair kinetics for the AMP-RNA aptamer complex. A comparative analysis of the structures of these RNA aptamer complexes yields the principles, patterns and diversity associated with RNA architecture, molecular recognition and adaptive binding associated with complex formation.

  13. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV).

    PubMed

    Bouman, Zita; Hendriks, Marc P H; Schmand, Ben A; Kessels, Roy P C; Aldenkamp, Albert P

    2016-01-01

    Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who were asked to simulate cognitive impairment as a result of a traumatic brain injury; the last group consisted of 50 healthy controls who were instructed to put forth full effort. Experimental malingerers performed significantly lower on all WMS-IV-NL tasks than did the patients and healthy controls. A binary logistic regression analysis was performed on the experimental malingerers and the patients. The first model contained the visual working memory subtests (Spatial Addition and Symbol Span) and the recognition tasks of the following subtests: Logical Memory, Verbal Paired Associates, Designs, Visual Reproduction. The results showed an overall classification rate of 78.4%, and only Spatial Addition explained a significant amount of variation (p < .001). Subsequent logistic regression analysis and receiver operating characteristic (ROC) analysis supported the discriminatory power of the subtest Spatial Addition. A scaled score cutoff of <4 produced 93% specificity and 52% sensitivity for detection of suboptimal performance. The WMS-IV-NL Spatial Addition subtest may provide clinically useful information for the detection of suboptimal performance.
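
    The cutoff statistics reported above (93% specificity and 52% sensitivity at a scaled score < 4) follow the standard definitions, which can be sketched with hypothetical scores. The function name, the score lists, and the "lower score = suspect" direction are illustrative assumptions, not data from the study:

```python
import numpy as np

def sens_spec(suspect_scores, patient_scores, cutoff):
    # Scores BELOW the cutoff are flagged as suboptimal performance.
    # Sensitivity: fraction of true malingerers that are flagged.
    # Specificity: fraction of genuine patients that are NOT flagged.
    suspect = np.asarray(suspect_scores)
    patient = np.asarray(patient_scores)
    sensitivity = float(np.mean(suspect < cutoff))
    specificity = float(np.mean(patient >= cutoff))
    return sensitivity, specificity

# Hypothetical scaled scores, for illustration only.
sens, spec = sens_spec([1, 2, 3, 6], [5, 6, 7, 2], cutoff=4)
```

    Sweeping `cutoff` over the score range and plotting sensitivity against 1 - specificity traces the ROC curve analysed in the study.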

  14. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of the important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  15. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of the important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor’s stability against luminance and the visual sensor’s textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716
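
    The "joint kernels of spatial adjacency and thermal range similarity" named above are the ingredients of a joint (cross/guided) bilateral filter, the general technique the method builds on. A minimal sketch, assuming grayscale images in [0, 1]; the window radius and kernel widths are made-up parameters, and this is an illustration of the technique rather than the paper's exact filter:

```python
import numpy as np

def joint_bilateral(visual, thermal, radius=2, sigma_s=1.5, sigma_r=0.1):
    # Smooth the visual channel using two multiplied kernels: spatial
    # adjacency (Gaussian on pixel distance) and range similarity
    # computed on the *thermal* guide, so thermal edges are preserved.
    h, w = visual.shape
    pad_v = np.pad(visual.astype(float), radius, mode="edge")
    pad_t = np.pad(thermal.astype(float), radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win_v = pad_v[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_t = pad_t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(win_t - thermal[i, j]) ** 2
                         / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * win_v) / np.sum(weights)
    return out
```

    Because the range weights come from the thermal guide, smoothing stops at thermal boundaries (e.g. the hand/background edge) even where the visual channel is noisy or poorly lit.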

  16. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  17. Perceptual asymmetries in greyscales: object-based versus space-based influences.

    PubMed

    Thomas, Nicole A; Elias, Lorin J

    2012-05-01

    Neurologically normal individuals exhibit leftward spatial biases, resulting from object- and space-based biases; however, their relative contributions to the overall bias remain unknown. Relative position within the display has not often been considered, with data typically collapsed across similar spatial conditions. Study 1 used the greyscales task to investigate the influence of relative position and of object- and space-based contributions. One image in each greyscale pair was shifted towards the left or the right. A leftward object-based bias moderated by a bias to the centre was expected. Results confirmed this, as a left object-based bias occurred in the right visual field, where the left side of the greyscale pairs was located in the centre visual field. Further, only lower visual field images exhibited a significant left bias in the left visual field. The left bias was also stronger when images were partially overlapping in the right visual field, demonstrating the importance of examining proximity. The second study examined whether object-based biases were stronger when actual objects, with directional lighting biases, were used. Direction of luminosity was congruent or incongruent with spatial location. A stronger object-based bias emerged overall; however, a leftward bias was seen in congruent conditions and a rightward bias was seen in incongruent conditions. In conditions with significant biases, the lower visual field image was chosen most often. Results show that object- and space-based biases both contribute; however, stimulus type allows either space- or object-based biases to be stronger. A lower visual field bias also interacts with these biases, leading the left bias to be eliminated under certain conditions. The complex interaction occurring between frame of reference and visual field makes spatial location extremely important in determining the strength of the leftward bias. Copyright © 2010 Elsevier Srl. All rights reserved.

  18. Enantiospecific recognition of DNA sequences by a proflavine Tröger base.

    PubMed

    Bailly, C; Laine, W; Demeunynck, M; Lhomme, J

    2000-07-05

    The DNA interaction of a chiral Tröger base derived from proflavine was investigated by DNA melting temperature measurements and complementary biochemical assays. DNase I footprinting experiments demonstrate that the binding of the proflavine-based Tröger base is both enantio- and sequence-specific. The (+)-isomer interacts poorly with DNA, in a non-sequence-selective fashion. In sharp contrast, the corresponding (-)-isomer preferentially recognizes certain DNA sequences containing both A·T and G·C base pairs, such as the motifs 5'-GTT·AAC and 5'-ATGA·TCAT. This is the first experimental demonstration that acridine-type Tröger bases can be used for enantiospecific recognition of DNA sequences. Copyright 2000 Academic Press.

  19. Neural Dissociation of Number from Letter Recognition and Its Relationship to Parietal Numerical Processing

    ERIC Educational Resources Information Center

    Park, Joonkoo; Hebrank, Andrew; Polk, Thad A.; Park, Denise C.

    2012-01-01

    The visual recognition of letters dissociates from the recognition of numbers at both the behavioral and neural level. In this article, using fMRI, we investigate whether the visual recognition of numbers dissociates from letters, thereby establishing a double dissociation. In Experiment 1, participants viewed strings of consonants and Arabic…

  20. Individual Differences in Visual Self-Recognition as a Function of Mother-Infant Attachment Relationship.

    ERIC Educational Resources Information Center

    Lewis, Michael; And Others

    1985-01-01

    Compares attachment relationships of infants at 12 months to their visual self-recognition at both 18 and 24 months. Individual differences in early attachment relations were related to later self-recognition. In particular, insecurely attached infants showed a trend toward earlier self-recognition than did securely attached infants. (Author/NH)

  1. Facial recognition using enhanced pixelized image for simulated visual prosthesis.

    PubMed

    Li, Ruonan; Zhang, Xudong; Zhang, Hui; Hu, Guanshu

    2005-01-01

    A simulated face recognition experiment using enhanced pixelized images was designed and performed for an artificial visual prosthesis. The results of the simulation reveal new characteristics of visual performance under an enhanced pixelization condition, and new suggestions for the future design of visual prostheses are provided.

  2. Change blindness and visual memory: visual representations get rich and act poor.

    PubMed

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  3. Assessing a Metacognitive Account of Associative Memory Impairments in Temporal Lobe Epilepsy

    PubMed Central

    Kemp, Steven; Souchay, Céline; Moulin, Chris J. A.

    2016-01-01

    Previous research has pointed to a deficit in associative recognition in temporal lobe epilepsy (TLE). Associative recognition tasks require discrimination between various combinations of words which have and have not been seen previously (such as old-old or old-new pairs). People with TLE tend to respond to rearranged old-old pairs as if they are “intact” old-old pairs, which has been interpreted as a failure to use a recollection strategy to overcome the familiarity of two recombined words into a new pairing. We examined this specific deficit in the context of metacognition, using postdecision confidence judgements at test. We expected that TLE patients would show inappropriate levels of confidence for associative recognition. Although TLE patients reported lower confidence levels in their responses overall, they were sensitive to the difficulty of varying pair types in their judgements and gave significantly higher confidence ratings for their correct answers. We conclude that a strategic deficit is not at play in the associative recognition of people with TLE, insofar as they are able to monitor the status of their memory system. This adds to a growing body of research suggesting that recollection is impaired in TLE, but not metacognition. PMID:27721992

  4. Recall and recognition of verbal paired associates in early Alzheimer's disease.

    PubMed

    Lowndes, G J; Saling, M M; Ames, D; Chiu, E; Gonzalez, L M; Savage, G R

    2008-07-01

    The primary impairment in early Alzheimer's disease (AD) is encoding/consolidation, resulting from medial temporal lobe (MTL) pathology. AD patients perform poorly on cued-recall paired associate learning (PAL) tasks, which assess the ability of the MTLs to encode relational memory. Since encoding and retrieval processes are confounded within performance indexes on cued-recall PAL, its specificity for AD is limited. Recognition paradigms tend to show good specificity for AD, and are well tolerated, but are typically less sensitive than recall tasks. Associate-recognition is a novel PAL task requiring a combination of recall and recognition processes. We administered a verbal associate-recognition test and a cued-recall analogue to 22 early AD patients and 55 elderly controls to compare their ability to discriminate these groups. Both paradigms used eight arbitrarily related word pairs (e.g., pool-teeth) with varying degrees of imageability. Associate-recognition was as effective as the cued-recall analogue in discriminating the groups, and logistic regression demonstrated that classification rates by the two tasks were equivalent. These preliminary findings support the clinical value of this recognition tool. Conceptually, it has potential for greater specificity in informing the neuropsychological diagnosis of AD in clinical samples, but this requires further empirical support.

  5. Aging memories: differential decay of episodic memory components.

    PubMed

    Talamini, Lucia M; Gorree, Eva

    2012-05-17

    Some memories about events can persist for decades, even a lifetime. However, recent memories incorporate rich sensory information, including knowledge on the spatial and temporal ordering of event features, while old memories typically lack this "filmic" quality. We suggest that this apparent change in the nature of memories may reflect a preferential loss of hippocampus-dependent, configurational information over more cortically based memory components, including memory for individual objects. The current study systematically tests this hypothesis, using a new paradigm that allows the contemporaneous assessment of memory for objects, object pairings, and object-position conjunctions. Retention of each memory component was tested, at multiple intervals, up to 3 mo following encoding. The three memory subtasks adopted the same retrieval paradigm and were matched for initial difficulty. Results show differential decay of the tested episodic memory components, whereby memory for configurational aspects of a scene (objects' co-occurrence and object position) decays faster than memory for featured objects. Interestingly, memory requiring a visually detailed object representation decays at a similar rate as global object recognition, arguing against interpretations based on task difficulty and against the notion that (visual) detail is forgotten preferentially. These findings show that memories undergo qualitative changes as they age. More specifically, event memories become less configurational over time, preferentially losing some of the higher order associations that are dependent on the hippocampus for initial fast encoding. Implications for theories of long-term memory are discussed.

  6. Common constraints limit Korean and English character recognition in peripheral vision.

    PubMed

    He, Yingchen; Kwon, MiYoung; Legge, Gordon E

    2018-01-01

    The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition.

  7. Common constraints limit Korean and English character recognition in peripheral vision

    PubMed Central

    He, Yingchen; Kwon, MiYoung; Legge, Gordon E.

    2018-01-01

    The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition. PMID:29327041

  8. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  9. Profile of Executive and Memory Function Associated with Amphetamine and Opiate Dependence

    PubMed Central

    Ersche, Karen D; Clark, Luke; London, Mervyn; Robbins, Trevor W; Sahakian, Barbara J

    2007-01-01

    Cognitive function was assessed in chronic drug users on neurocognitive measures of executive and memory function. Current amphetamine users were contrasted with current opiate users, and these two groups were compared with former users of these substances (abstinent for at least one year). Four groups of participants were recruited: amphetamine-dependent individuals, opiate-dependent individuals, former users of amphetamines and/or opiates, and healthy non-drug-taking controls. Participants were administered the Tower of London (TOL) planning task and the 3D-IDED attentional set-shifting task to assess executive function, and Paired Associates Learning and Delayed Pattern Recognition Memory tasks to assess visual memory function. The three groups of substance users showed significant impairments on TOL planning, Pattern Recognition Memory and Paired Associates Learning. Current amphetamine users displayed a greater degree of impairment than current opiate users. Consistent with previous research showing that healthy men perform better on visuo-spatial tests than women, our male controls remembered significantly more paired associates than their female counterparts. This relationship was reversed in drug users. While performance of female drug users was normal, male drug users showed significant impairment compared to both their female counterparts and male controls. There was no difference in performance between current and former drug users. Neither years of drug abuse nor years of drug abstinence were associated with performance. Chronic drug users display pronounced neuropsychological impairment in the domains of executive and memory function. Impairment persists after several years of drug abstinence and may reflect neuropathology in frontal and temporal cortices. PMID:16160707

  10. Traffic Sign Detection Based on Biologically Visual Mechanism

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zhu, X.; Li, D.

    2012-07-01

    TSR (traffic sign recognition) is an important problem in ITS (intelligent transportation systems) and is receiving growing attention for driver-assistance systems, unmanned vehicles, and related applications. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. Because traffic signs are designed to comply with the visual attention mechanisms of humans, it is reasonable to detect them using a visual attention model. In our method, the whole scene is first analyzed by a visual attention model to find candidate areas where traffic signs might be located. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect the signs themselves. In traffic sign detection experiments, the results show that the proposed method is more effective and robust than other existing saliency detection methods.
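
    As an illustration of this saliency-then-shape pipeline (not the authors' model), a generic visual attention method such as a spectral-residual saliency map (Hou & Zhang, 2007) can propose candidate sign regions; all names and thresholds below are illustrative assumptions:

```python
import numpy as np

def spectral_residual_saliency(img):
    """Saliency map via the spectral-residual method (Hou & Zhang, 2007)."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local 3x3 box average.
    pad = np.pad(log_amp, 1, mode='edge')
    h, w = log_amp.shape
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    sal = np.abs(np.fft.ifft2(np.exp((log_amp - avg) + 1j * phase))) ** 2
    return sal / sal.max()

def candidate_regions(sal, thresh=0.5):
    """Threshold the saliency map into a binary mask of candidate areas."""
    return sal > thresh

# Toy scene: a bright square "sign" on a dark road-like background.
scene = np.zeros((64, 64))
scene[20:30, 20:30] = 1.0
sal = spectral_residual_saliency(scene)
mask = candidate_regions(sal)
```

    A real detector would then test each connected candidate region against sign shape characteristics (circles, triangles) before the recognition stage.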

  11. Dissociations of the number and precision of visual short-term memory representations in change detection.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-11-01

    The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analysis and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
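
    The mixture model used for the continuous-recall estimates (a von Mises "memory" component whose concentration indexes precision, plus a uniform guessing component whose weight indexes the number of stored items) can be fit with a short EM loop. This is a generic sketch on synthetic data, not the authors' analysis code:

```python
import numpy as np

def fit_mixture(errors, iters=200):
    """EM fit of a two-component mixture for continuous-recall errors:
    a von Mises component (responses from memory; concentration kappa
    indexes precision) plus a uniform component (random guesses).
    Returns (p_mem, kappa); errors are in radians."""
    p, kappa = 0.5, 5.0
    uniform = 1.0 / (2 * np.pi)
    for _ in range(iters):
        vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
        resp = p * vm / (p * vm + (1 - p) * uniform)   # E-step
        p = resp.mean()                                # M-step: mixing weight
        # Weighted mean resultant length -> kappa (standard approximation).
        r = np.abs(np.sum(resp * np.exp(1j * errors))) / resp.sum()
        kappa = r * (2 - r ** 2) / (1 - r ** 2)
    return p, kappa

# Synthetic recall errors: 70% precise responses, 30% random guesses.
rng = np.random.default_rng(0)
n = 4000
n_mem = int(0.7 * n)
errors = np.concatenate([rng.vonmises(0.0, 8.0, size=n_mem),
                         rng.uniform(-np.pi, np.pi, size=n - n_mem)])
p_mem, kappa = fit_mixture(errors)
```

    On real data, p_mem times the set size estimates the number of stored representations and kappa their precision.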

  12. Simple Smartphone-Based Guiding System for Visually Impaired People

    PubMed Central

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-01-01

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system that solves navigation problems for visually impaired people and achieves obstacle avoidance, enabling them to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online and offline, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to a backend server for processing. The backend server uses the faster region convolutional neural network (Faster R-CNN) algorithm or the you only look once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people perceive the types and locations of obstacles around them. PMID:28608811

  13. Simple Smartphone-Based Guiding System for Visually Impaired People.

    PubMed

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system that solves navigation problems for visually impaired people and achieves obstacle avoidance, enabling them to travel smoothly from a starting point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online and offline, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to a backend server for processing. The backend server uses the faster region convolutional neural network (Faster R-CNN) algorithm or the you only look once (YOLO) algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people perceive the types and locations of obstacles around them.

  14. Multiple ways to the prior occurrence of an event: an electrophysiological dissociation of experimental and conceptually driven familiarity in recognition memory.

    PubMed

    Wiegand, Iris; Bader, Regine; Mecklinger, Axel

    2010-11-11

    Recent research has shown that familiarity contributes to associative memory when the to-be-associated stimuli are unitized during encoding. However, the specific processes underlying familiarity-based recognition of unitized representations remain unclear. In this study, we present electrophysiologically dissociable early old/new effects, presumably related to two different kinds of familiarity inherent in associative recognition tasks. In a study-test associative recognition memory paradigm, we employed encoding conditions that established unitized representations of two pre-experimentally unrelated words, e.g. vegetable-bible. We compared event-related potentials (ERP) during the retrieval of these unitized word pairs using different retrieval cues. Word pairs presented in the same order as during unitization at encoding elicited a parietally distributed early old/new effect, which we interpret as reflecting conceptually driven familiarity for newly formed concepts. Conversely, word pairs presented in reversed order only elicited a topographically dissociable early effect, i.e. the mid-frontal old/new effect, the putative correlate of experimental familiarity. The late parietal old/new effect, the putative ERP correlate of recollection, was obtained irrespective of word order, though it was larger for words presented in the same order. These results indicate that familiarity may not be a unitary process and that different task demands can promote the assessment of conceptually driven familiarity for novel unitized concepts or experimentally induced increments of experimental familiarity, respectively. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  16. The development of newborn object recognition in fast and slow visual worlds

    PubMed Central

    Wood, Justin N.; Wood, Samantha M. W.

    2016-01-01

    Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925

  17. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
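
    The listed distortion types are standard image operations; a minimal sketch of three of them (additive white noise, contrast change, and separable Gaussian blur), with parameters chosen for illustration rather than taken from QLFW:

```python
import numpy as np

def add_white_noise(img, sigma, seed=4):
    """Additive white Gaussian noise (one of the listed distortion types)."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def change_contrast(img, factor):
    """Scale contrast about mid-gray; factor < 1 lowers contrast."""
    return np.clip(0.5 + factor * (img - 0.5), 0.0, 1.0)

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, 'same')
    return np.apply_along_axis(np.convolve, 0, rows, kernel, 'same')

# Toy grayscale "face": a horizontal luminance gradient.
face = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
levels = [gaussian_blur(face, s) for s in (0.5, 1.0, 2.0)]  # graded severity
```

    Applying each distortion at several graded levels, as above, is what yields the spectrum from no perceived impairment to strong perceived impairment.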

  18. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  19. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the "structure kernel", which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term that measures the global visual similarity of two objects; 2) the part term that measures the visual similarity of corresponding parts; 3) the spatial term that measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
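
    A minimal sketch of a kernel with this three-term shape, under the simplifying assumption that parts are already in one-to-one correspondence (the paper's kernel is more general); all names and weights are illustrative:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF similarity between two descriptor vectors."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def structure_kernel(obj_a, obj_b, w=(1.0, 1.0, 1.0)):
    """Three-term similarity for part-based objects: global appearance,
    part appearance, and spatial configuration.  Each object is a tuple
    (global_descriptor, [part_descriptors], [part_positions])."""
    ga, parts_a, pos_a = obj_a
    gb, parts_b, pos_b = obj_b
    k_global = rbf(ga, gb)
    k_part = np.mean([rbf(p, q) for p, q in zip(parts_a, parts_b)])
    k_spatial = np.mean([rbf(p, q) for p, q in zip(pos_a, pos_b)])
    return w[0] * k_global + w[1] * k_part + w[2] * k_spatial

# Identical objects score the maximum (3.0 with unit weights); moving a
# part lowers only the spatial term.
face = ([1.0, 0.2], [[0.5, 0.5], [0.9, 0.1]], [[0.0, 0.0], [1.0, 0.0]])
shifted = ([1.0, 0.2], [[0.5, 0.5], [0.9, 0.1]], [[0.0, 0.0], [1.0, 0.8]])
```

    Because each term is itself a positive definite RBF kernel, a nonnegative weighted sum of them is positive definite as well, so it can be plugged into an SVM directly.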

  20. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Representations of Shape in Object Recognition and Long-Term Visual Memory

    DTIC Science & Technology

    1993-02-11

    in anything other than linguistic terms (Biederman, 1987, for example). STATUS 1. Viewpoint-Dependent Features in Object Representation Tarr and...is object-based orientation-independent representations sufficient for "basic-level" categorization (Biederman, 1987; Corballis, 1988). Alternatively...space. REFERENCES Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147. Cooper, L

  2. TU-C-17A-03: An Integrated Contour Evaluation Software Tool Using Supervised Pattern Recognition for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Tan, J; Kavanaugh, J

    Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical use. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using the object-oriented programming language C# and application programming interfaces, e.g., the Visualization Toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models of critical RT structures in real time with 360° visualization. Principal component analysis (PCA) was used to let the system self-update geometry variations of normal structures, with physician-approved RT contours serving as a training dataset. An in-house supervised PCA-based contour recognition method was used to automatically evaluate contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and the Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, loading/saving of medical images of various modalities and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported 360° rendering of the RT structures in a multi-slice view, allowing physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification.
This tool has great potential for facilitating treatment planning: an automatic contour evaluation module spares physicians/dosimetrists unnecessary manual verification. In addition, its compact, stand-alone design allows future extension with additional functions for physicians' clinical needs.
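
    The supervised-PCA idea (learn the geometric variation of physician-approved contours, then flag a new contour by how poorly it fits the learned subspace) can be sketched as follows; the radial-distance features and the toy data are illustrative assumptions, not the tool's implementation:

```python
import numpy as np

def fit_pca(X, n_components):
    """PCA via SVD on mean-centered training contours (rows = contours,
    columns = e.g. flattened radial-distance samples)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(x, mean, components):
    """Distance between a contour and its projection onto the PCA
    subspace; large values suggest an abnormal contour."""
    centered = x - mean
    recon = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - recon))

# Toy training set: approved "contours" as 32 noisy radial samples each.
rng = np.random.default_rng(1)
train = 1.0 + 0.05 * rng.standard_normal((50, 32))
mean, comps = fit_pca(train, n_components=3)

normal = 1.0 + 0.05 * rng.standard_normal(32)
abnormal = normal.copy()
abnormal[:8] += 1.5  # a gross contouring error on one side
```

    A clinical tool would calibrate a flagging threshold on the errors of held-out approved contours rather than using raw error values.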

  3. The neural correlates of gist-based true and false recognition

    PubMed Central

    Gutchess, Angela H.; Schacter, Daniel L.

    2012-01-01

    When information is thematically related to previously studied information, gist-based processes contribute to false recognition. Using functional MRI, we examined the neural correlates of gist-based recognition as a function of increasing numbers of studied exemplars. Sixteen participants incidentally encoded small, medium, and large sets of pictures, and we compared the neural response at recognition using parametric modulation analyses. For hits, regions in middle occipital, middle temporal, and posterior parietal cortex linearly modulated their activity according to the number of related encoded items. For false alarms, visual, parietal, and hippocampal regions were modulated as a function of the encoded set size. The present results are consistent with prior work in that the neural regions supporting veridical memory also contribute to false memory for related information. The results also reveal that these regions respond to the degree of relatedness among similar items, and implicate perceptual and constructive processes in gist-based false memory. PMID:22155331

  4. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently understudied. We studied the extraction of prostatic lumina and automated recognition of calculus images. Lumina were extracted from prostate histology images based on local entropy and Otsu thresholding, and calculi were recognized using PCA-SVM based on the texture features of the prostatic calculus. The SVM classifier showed an average processing time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, readily recognizes the concentric structure and visual features, and is therefore effective for the automated recognition of prostatic calculi.
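
A PCA-SVM pipeline of the kind named here can be sketched with scikit-learn: reduce texture feature vectors with PCA, then classify with an SVM. The features, class separation, and component count below are synthetic stand-ins, not the paper's dataset or tuning.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "texture feature" vectors for calculus vs. non-calculus regions.
rng = np.random.default_rng(1)
n = 100
calculus = rng.normal(1.0, 0.3, size=(n, 12))
other = rng.normal(-1.0, 0.3, size=(n, 12))
X = np.vstack([calculus, other])
y = np.array([1] * n + [0] * n)

# Standardize -> project onto principal components -> RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```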

  5. Robust Indoor Human Activity Recognition Using Wireless Signals.

    PubMed

    Wang, Yi; Jiang, Xinli; Cao, Rongyu; Wang, Xiyang

    2015-07-15

    Wireless signal-based activity detection and recognition technology may be complementary to existing vision-based methods, especially under occlusion, viewpoint change, complex backgrounds, changing lighting conditions, and so on. This paper explores the properties of the channel state information (CSI) of Wi-Fi signals and presents a robust indoor daily human activity recognition framework requiring only one transmission point (TP)-access point (AP) pair. First, some indoor human actions are selected as primitive actions forming a training set. Then, an online filtering method is designed to make the actions' CSI curves smooth while retaining enough pattern information. Each primitive action pattern can be segmented from the outliers of its multi-input multi-output (MIMO) signals by a proposed segmentation method. Lastly, in online activity recognition, with proper feature selection and Support Vector Machine (SVM)-based multi-class classification, activities composed of primitive actions can be recognized insensitively to location, orientation, and speed.
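
The first two steps, smoothing a CSI amplitude curve and segmenting the action span, can be sketched as follows. The moving-average filter and the deviation-from-baseline segmentation rule are simplified assumptions; the paper's online filter and outlier-based segmentation are more elaborate, and the signal here is synthetic.

```python
import numpy as np

# Synthetic CSI amplitude: idle-channel noise plus one injected "action" burst.
rng = np.random.default_rng(2)
csi = rng.normal(0.0, 0.1, size=500)
csi[200:300] += 2.0 * np.sin(np.linspace(0, np.pi, 100))

def smooth(x, w=9):
    """Moving-average filter standing in for the paper's online filter."""
    return np.convolve(x, np.ones(w) / w, mode="same")

s = smooth(csi)
baseline = np.median(s)
sigma = np.std(s[:150])                  # idle segment estimates the noise level
active = np.abs(s - baseline) > 5.0 * sigma
start = int(np.argmax(active))           # first sample of the action span
end = int(len(active) - np.argmax(active[::-1]))
```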

  6. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  7. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  8. Context-dependent similarity effects in letter recognition.

    PubMed

    Kinoshita, Sachiko; Robidoux, Serje; Guilbert, Daniel; Norris, Dennis

    2015-10-01

    In visual word recognition tasks, digit primes that are visually similar to letter string targets (e.g., 4/A, 8/B) are known to facilitate letter identification relative to visually dissimilar digits (e.g., 6/A, 7/B); in contrast, with letter primes, visual similarity effects have been elusive. In the present study we show that the visual similarity effect with letter primes can be made to come and go, depending on whether it is necessary to discriminate between visually similar letters. The results support a Bayesian view which regards letter recognition not as a passive activation process driven by the fixed stimulus properties, but as a dynamic evidence accumulation process for a decision that is guided by the task context.
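
The "dynamic evidence accumulation" view can be made concrete with a toy sequential Bayesian model: noisy samples update the posterior over two letter hypotheses, and the task context sets the decision criterion, for example a stricter posterior threshold when visually similar letters must be discriminated. The evidence stream, means, and thresholds below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def accumulate(samples, mu_a, mu_b, sigma, threshold):
    """Return ('A' or 'B', step count) once the posterior for either
    letter crosses `threshold`; flat prior, Gaussian likelihoods."""
    log_odds = 0.0                      # log P(A | evidence) / P(B | evidence)
    for i, x in enumerate(samples, start=1):
        log_odds += (-(x - mu_a) ** 2 + (x - mu_b) ** 2) / (2 * sigma ** 2)
        p_a = 1.0 / (1.0 + np.exp(-log_odds))
        if p_a >= threshold:
            return "A", i
        if p_a <= 1.0 - threshold:
            return "B", i
    return ("A", len(samples)) if log_odds >= 0 else ("B", len(samples))

# Deterministic toy evidence stream favoring letter "A".
samples = np.tile([0.6, 1.4], 25)
# Lenient criterion (dissimilar letters) vs. strict criterion (similar letters).
lenient = accumulate(samples, mu_a=1.0, mu_b=-1.0, sigma=1.0, threshold=0.9)
strict = accumulate(samples, mu_a=1.0, mu_b=-1.0, sigma=1.0, threshold=0.999)
```

The stricter criterion requires more evidence before committing, which is one way a fixed stimulus can yield context-dependent responses.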

  9. RNApdbee 2.0: multifunctional tool for RNA structure annotation.

    PubMed

    Zok, Tomasz; Antczak, Maciej; Zurkowski, Michal; Popenda, Mariusz; Blazewicz, Jacek; Adamiak, Ryszard W; Szachniuk, Marta

    2018-04-30

    In the field of RNA structural biology and bioinformatics, access to correctly annotated RNA structure is of crucial importance, especially in secondary and 3D structure prediction. The RNApdbee webserver, introduced in 2014, primarily aimed to address the problem of RNA secondary structure extraction from PDB files. Its new version, RNApdbee 2.0, is a highly advanced multifunctional tool for RNA structure annotation, revealing the relationship between RNA secondary and 3D structure given in PDB or PDBx/mmCIF format. The upgraded version incorporates new algorithms for the recognition and classification of high-order pseudoknots in large RNA structures. It allows analysis of the impact of isolated base pairs on RNA structure. It can visualize RNA secondary structures, including those of quadruplexes, with depiction of non-canonical interactions. It also annotates motifs to ease identification of stems, loops, and single-stranded fragments in the input RNA structure. RNApdbee 2.0 is implemented as a publicly available webserver with an intuitive interface and can be freely accessed at http://rnapdbee.cs.put.poznan.pl/.

  10. Semantic memory influences episodic retrieval by increased familiarity.

    PubMed

    Wang, Yujuan; Mao, Xinrui; Li, Bingcan; Lu, Baoqing; Guo, Chunyan

    2016-07-06

    The role of familiarity in associative recognition has been investigated in a number of studies, which have indicated that familiarity can facilitate recognition under certain circumstances. The ability of a pre-experimentally existing common representation to boost the contribution of familiarity has rarely been investigated. In addition, although many studies have investigated the interactions between semantic memory and episodic retrieval, the conditions that influence the presence of specific patterns were unclear. This study aimed to address these two questions. We manipulated the degree of overlap between the two representations using synonym and nonsynonym pairs in an associative recognition task. Results indicated that an increased degree of overlap enhanced recognition performance. The analysis of event-related potentials effects in the test phase showed that synonym pairs elicited both types of old/rearranged effects, whereas nonsynonym pairs elicited a late old/rearranged effect. These results confirmed that a common representation, irrespective of source, was necessary for assuring the presence of familiarity, but a common representation could not distinguish associative recognition depending on familiarity alone. Moreover, our expected double dissociation between familiarity and recollection was absent, which indicated that mode selection may be influenced by the degree of distinctness between old and rearranged pairs rather than the degree of overlap between representations.

  11. Development of Encoding and Decision Processes in Visual Recognition.

    ERIC Educational Resources Information Center

    Newcombe, Nora; MacKenzie, Doris L.

    This experiment examined two processes which might account for developmental increases in accuracy in visual recognition tasks: age-related increases in efficiency of scanning during inspection, and age-related increases in the ability to make decisions systematically during test. Critical details necessary for recognition were highlighted as…

  12. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  13. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray-level images, without the use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing based on the Canny edge operator. Edge points are then linked into single-pixel-thick straight-line segments and circular arcs: this operation serves both to filter out isolated and highly irregular segments and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist of linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulus so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
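
The first stage (edge-point detection) can be illustrated with a minimal gradient-magnitude stand-in on a synthetic image. This is a deliberate simplification: the system uses the Canny operator with non-maximum suppression and hysteresis, which the crude threshold below does not implement.

```python
import numpy as np

# Synthetic gray-level image containing one bright rectangular "object".
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0

# Finite-difference gradients; magnitude is large only at intensity edges.
gy, gx = np.gradient(img)
magnitude = np.hypot(gx, gy)
edges = magnitude > 0.25                 # crude threshold in place of hysteresis

# Edge points would next be linked into segments and arcs for grouping.
edge_points = np.argwhere(edges)
```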

  14. Eye movements during object recognition in visual agnosia.

    PubMed

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. [Tachistoscope and dichotic listening test of the subject after the transection of the posterior part of the corpus callosum].

    PubMed

    Watanabe, S; Tasaki, H; Hojo, K; Yoshimura, I; Sato, T; Nakaoka, T; Iwabuchi, T

    1982-06-01

    The authors conducted neuropsychological studies, using a tachistoscope and a dichotic listening test, of a subject who had undergone transection of the posterior part of the corpus callosum. For tachistoscopic recognition, the stimulus material comprised various Japanese characters (Katakana, Hiragana, Kanji), faces (variations in eyebrow and mouth form), and line slopes. Table 1 shows the results for the present subject and for two previously reported cases (subject 1 and subject 2). The present subject's performance on the Japanese character tasks showed greater right-visual-field superiority than that of subjects 1 and 2. For auditory recognition, the tasks used in the dichotic listening test are listed in Tables 2, 3, and 4. On the different-digits task (three pairs), the present subject showed greater right-ear superiority (right ear: 61.1, left ear: 5.9) than subjects 1 and 2.

  16. Study on Impact Acoustic—Visual Sensor-Based Sorting of ELV Plastic Materials

    PubMed Central

    Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu

    2017-01-01

    This paper presents a study of a novel multi-sensor method using acoustic and visual sensors for the detection, recognition, and separation of end-of-life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been used for material recycling for the last two decades. One remaining problem stems from black and dark-dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting, using impact resonant acoustic emissions (AEs) and laser triangulation scanning, is introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. The geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. Signal processing and feature extraction for both the visual data and the acoustic signals were realized with virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the acoustic characteristics of the tested PP and ABS plastics were clearly different from each other but similar to those of their respective modified materials.
The scrap recognition rate, i.e., the theoretical sorting efficiency, reached about 50% between PP and PP-EPDM and about 75% between ABS and ABS-PC for scrap diameters of 14-23 mm; excluding abnormal impacts, the actual separation rates were 39.2% for PP, 41.4% for PP/EPDM, 62.4% for ABS, and 70.8% for ABS/PC scraps. Within the 8-13 mm diameter range, only 25% of PP, 27% of PP/EPDM, 43% of ABS, and 47% of ABS/PC scraps were separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials and an effective method for ASR reduction and recycling. PMID:28594341
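
The acoustic-feature step, estimating the power spectral density of an impact "ping" and reading off its dominant frequency, can be sketched with SciPy's Welch estimator. The decaying sinusoids, sample rate, and resonant frequencies below are arbitrary illustrations, not measured PP/ABS spectra.

```python
import numpy as np
from scipy.signal import welch

fs = 48_000                               # sample rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
# Decaying sinusoids standing in for two materials' impact resonances.
ping_a = np.exp(-60 * t) * np.sin(2 * np.pi * 3_000 * t)   # "PP-like"
ping_b = np.exp(-60 * t) * np.sin(2 * np.pi * 7_500 * t)   # "ABS-like"

def dominant_freq(x):
    """Frequency of the PSD peak, via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    return float(f[np.argmax(pxx)])

fa, fb = dominant_freq(ping_a), dominant_freq(ping_b)
```

Distinct dominant frequencies are the kind of separable acoustic signature on which material classification can be based.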

  18. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice-recognition-based system was developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer, is mounted on the ceiling at the required angle above the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads an embossed character (object) with the fingers and then speaks the answer (the character's name, shape, etc.) into the microphone. On the child's voice command, received by the microphone, an image is captured by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, enabling self-education of a visually impaired child. A speech recognition program was also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes to record and process the child's commands.

  19. Using crypts as iris minutiae

    NASA Astrophysics Data System (ADS)

    Shen, Feng; Flynn, Patrick J.

    2013-05-01

    Iris recognition is one of the most reliable biometric technologies for identity recognition and verification, but it has not been used in a forensic context because the representation and matching of iris features are not straightforward for traditional iris recognition techniques. In this paper we concentrate on the iris crypt as a visible feature used to represent the characteristics of irises in a similar way to fingerprint minutiae. The matching of crypts is based on their appearances and locations. The number of matching crypt pairs found between two irises can be used for identity verification and the convenience of manual inspection makes iris crypts a potential candidate for forensic applications.

  20. Recognition of own-race and other-race faces by three-month-old infants.

    PubMed

    Sangrigoli, Sandy; De Schonen, Scania

    2004-10-01

    People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested face recognition in 3-month-old Caucasian infants by conducting two experiments using Caucasian and Asiatic faces and a visual pair-comparison task. We hypothesized that if the other race effect develops together with face processing skills during the first months of life, the ability to recognize own-race faces will be greater than the ability to recognize other-race faces: 3-month-old Caucasian infants should be better at recognizing Caucasian faces than Asiatic faces. If, on the contrary, the other-race effect is the long-term result of acquired expertise, no difference between recognizing own- and other-race faces will be observed at that age. In Experiment 1, Caucasian infants were habituated to a single face. Recognition was assessed by a novelty preference paradigm. The infants' recognition performance was better for Caucasian than for Asiatic faces. In Experiment 2, Caucasian infants were familiarized with three individual faces. Recognition was demonstrated with both Caucasian and Asiatic faces. These results suggest that (i) the representation of face information by 3-month-olds may be race-experience-dependent (Experiment 1), and (ii) short-term familiarization with exemplars of another race group is sufficient to reduce the other-race effect and to extend the power of face processing (Experiment 2).
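
The visual pair-comparison measure behind these experiments reduces to a simple proportion: the novelty-preference score, i.e., the fraction of looking time devoted to the novel face, with values reliably above 0.5 indicating recognition of the familiar face. The looking times below are illustrative numbers, not the study's data.

```python
def novelty_preference(novel_looking_s, familiar_looking_s):
    """Proportion of total looking time spent on the novel stimulus."""
    return novel_looking_s / (novel_looking_s + familiar_looking_s)

# Hypothetical looking times (seconds) for one infant in each condition.
own_race = novelty_preference(6.2, 3.8)     # > 0.5: novel face discriminated
other_race = novelty_preference(5.1, 4.9)   # ~ 0.5: no reliable discrimination
```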

  1. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    PubMed

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other is not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making in turn requires that decision makers be sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity) or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both within the selected set of objects and between the domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
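
The decision rule itself is simple enough to state as code: when exactly one object is recognized, infer that it has the higher criterion value; otherwise recognition does not discriminate. The city names and recognition set below are hypothetical examples, and the `use_heuristic` switch is a crude stand-in for the adaptive suspension of the heuristic that the paper studies.

```python
def recognition_heuristic(obj_a, obj_b, recognized, use_heuristic=True):
    """Return the object inferred to have the higher criterion value, or
    None when recognition does not discriminate (both or neither object
    recognized) or the heuristic is suspended."""
    a_known, b_known = obj_a in recognized, obj_b in recognized
    if use_heuristic and a_known != b_known:
        return obj_a if a_known else obj_b
    return None

recognized = {"Berlin", "Munich"}           # hypothetical recognition set
choice = recognition_heuristic("Berlin", "Bielefeld", recognized)
```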

  2. The visual discrimination of negative facial expressions by younger and older adults.

    PubMed

    Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley

    2013-04-05

    Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
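
The signal detection measure used to quantify discrimination ability is typically d' (sensitivity), the difference between the z-transformed hit and false-alarm rates. A minimal sketch, with illustrative rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for a same/different emotion discrimination block.
younger = d_prime(0.85, 0.15)   # better discrimination
older = d_prime(0.70, 0.30)     # poorer discrimination at low intensity
```

Higher d' means the two expression categories are better separated perceptually, independent of response bias.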

  3. Explaining seeing? Disentangling qualia from perceptual organization.

    PubMed

    Ibáñez, Agustin; Bekinschtein, Tristan

    2010-09-01

    Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding, figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Given this, should the neural signatures of visual integration (via reentrant processes) be considered non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.

  4. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels-they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  5. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
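
Time-resolved decoding of the kind reviewed here trains and cross-validates a classifier independently at every timepoint, so the accuracy trace shows when category information becomes decodable. The sketch below uses synthetic M/EEG-like data with an effect injected at a known timepoint; the shapes, classifier, and thresholds are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic trials: (trials, channels, timepoints), two stimulus categories.
rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 80, 16, 30
X = rng.normal(0.0, 1.0, size=(n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 15:] += 1.0        # category information appears at timepoint 15

# Decode independently at every timepoint (5-fold cross-validation).
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
onset = int(np.argmax(accuracy > 0.75))   # first reliably decodable timepoint
```

Before the injected effect, accuracy hovers at chance; from the effect onward it rises sharply, mirroring the "low-level early, categorical later" time-course described above.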

  6. The Role of Anterior Nuclei of the Thalamus: A Subcortical Gate in Memory Processing: An Intracerebral Recording Study.

    PubMed

    Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan

    2015-01-01

    Our aim was to study the involvement of the anterior nuclei of the thalamus (ANT), as compared with that of the hippocampus, in encoding and recognition during visual and verbal memory tasks. We studied intracerebral recordings in patients with pharmacoresistant epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT, and compared the results with those from epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. P300-like potentials were recorded in the hippocampus during the visual and verbal encoding and recognition tasks, and in the ANT during the visual encoding task and the visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded those in the hippocampus. The ANT is thus a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. The ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures.

  7. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  8. Incorporating a guanidine-modified cytosine base into triplex-forming PNAs for the recognition of a C-G pyrimidine–purine inversion site of an RNA duplex

    PubMed Central

    Toh, Desiree-Faye Kaixin; Devi, Gitali; Patil, Kiran M.; Qu, Qiuyu; Maraswami, Manikantha; Xiao, Yunyun; Loh, Teck Peng; Zhao, Yanli; Chen, Gang

    2016-01-01

    RNA duplex regions are often involved in tertiary interactions and protein binding and thus there is great potential in developing ligands that sequence-specifically bind to RNA duplexes. We have developed a convenient synthesis method for a modified peptide nucleic acid (PNA) monomer with a guanidine-modified 5-methyl cytosine base. We demonstrated by gel electrophoresis, fluorescence and thermal melting experiments that short PNAs incorporating the modified residue show high binding affinity and sequence specificity in the recognition of an RNA duplex containing an internal inverted Watson-Crick C-G base pair. Remarkably, the relatively short PNAs show no appreciable binding to DNA duplexes or single-stranded RNAs. The attached guanidine group stabilizes the base triple through hydrogen bonding with the G base in a C-G pair. Selective binding towards an RNA duplex over a single-stranded RNA can be rationalized by the fact that alkylation of the amine of a 5-methyl C base blocks the Watson–Crick edge. PNAs incorporating multiple guanidine-modified cytosine residues are able to enter HeLa cells without any transfection agent. PMID:27596599

  9. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  10. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
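The discrimination analyses in this record rest on the standard signal-detection sensitivity index d′ = z(hit rate) − z(false-alarm rate). A minimal sketch of that computation (the log-linear correction and the example rates below are illustrative assumptions, not values taken from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float, n: int) -> float:
    """Sensitivity index d' = z(H) - z(F), with a log-linear
    correction (an assumed convention) so rates of 0 or 1 do not
    produce infinite z-scores."""
    h = (hit_rate * n + 0.5) / (n + 1)
    f = (fa_rate * n + 0.5) / (n + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(h) - z(f)

# Hypothetical rates: a "far" consonant pair is discriminated better
# than a "near" (within-viseme) pair, so it yields a larger d'.
near = d_prime(0.65, 0.35, 100)
far = d_prime(0.90, 0.10, 100)
```

Larger d′ for far than for near pairs, as reported for Experiment 1, shows up directly in such a comparison.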

  11. 2-Methoxypyridine as a Thymidine Mimic in Watson-Crick Base Pairs of DNA and PNA: Synthesis, Thermal Stability, and NMR Structural Studies.

    PubMed

    Novosjolova, Irina; Kennedy, Scott D; Rozners, Eriks

    2017-11-02

    The development of nucleic acid base-pair analogues that use new modes of molecular recognition is important both for fundamental research and practical applications. The goal of this study was to evaluate 2-methoxypyridine as a cationic thymidine mimic in the A-T base pair. The hypothesis was that including protonation in the Watson-Crick base pairing scheme would enhance the thermal stability of the DNA double helix without compromising the sequence selectivity. DNA and peptide nucleic acid (PNA) sequences containing the new 2-methoxypyridine nucleobase (P) were synthesized and studied by using UV thermal melting and NMR spectroscopy. Introduction of P nucleobase caused a loss of thermal stability of ≈10 °C in DNA-DNA duplexes and ≈20 °C in PNA-DNA duplexes over a range of mildly acidic to neutral pH. Despite the decrease in thermal stability, the NMR structural studies showed that P-A formed the expected protonated base pair at pH 4.3. Our study demonstrates the feasibility of cationic unnatural base pairs; however, future optimization of such analogues will be required. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  13. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
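The intra/inter mode decision described in this record is conventionally cast as minimizing a Lagrangian rate-distortion cost J = D + λR over the candidate modes. A toy sketch of that decision rule (the mode names and the distortion/rate numbers are hypothetical, not taken from the paper):

```python
def rd_mode_decision(candidates, lam):
    """Pick the coding mode minimizing the Lagrangian cost
    J = D + lambda * R.
    candidates: dict mapping mode name -> (distortion, rate_in_bits)."""
    def cost(item):
        _, (d, r) = item
        return d + lam * r
    mode, _ = min(candidates.items(), key=cost)
    return mode

# Hypothetical costs for one descriptor: inter-frame coding spends far
# fewer bits by predicting from the previous frame, at some extra
# distortion; "skip" reuses the prediction almost for free.
modes = {"intra": (2.0, 120), "inter": (2.5, 40), "skip": (9.0, 1)}
low_rate_budget = rd_mode_decision(modes, lam=0.1)    # rate weighs heavily
high_fidelity = rd_mode_decision(modes, lam=0.001)    # distortion dominates
```

Sweeping λ traces out the rate-distortion trade-off: large λ favors cheap modes, small λ favors faithful ones.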

  14. Use of visual CO2 feedback as a retrofit solution for improving classroom air quality.

    PubMed

    Wargocki, P; Da Silva, N A F

    2015-02-01

Carbon dioxide (CO2) sensors that provide a visual indication were installed in classrooms during normal school operation. During 2-week periods, teachers and students were instructed to open the windows in response to the visual CO2 feedback in 1 week and open them, as they would normally do, without visual feedback, in the other week. In the heating season, two pairs of classrooms were monitored, one pair naturally and the other pair mechanically ventilated. In the cooling season, two pairs of naturally ventilated classrooms were monitored, one pair with split cooling in operation and the other pair with no cooling. Classrooms were matched by grade. Providing visual CO2 feedback reduced CO2 levels, as more windows were opened in this condition. This increased energy use for heating and reduced the cooling requirement in summertime. Split cooling reduced the frequency of window opening only when no visual CO2 feedback was present. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Tzirakis, Panagiotis; Trigeorgis, George; Nicolaou, Mihalis A.; Schuller, Bjorn W.; Zafeiriou, Stefanos

    2017-12-01

Automatic affect recognition is a challenging task due to the various modalities in which emotions can be expressed. Applications can be found in many domains, including multimedia retrieval and human-computer interaction. In recent years, deep neural networks have been used with great success in determining emotional states. Inspired by this success, we propose an emotion recognition system using auditory and visual modalities. To capture the emotional content of various styles of speaking, robust features need to be extracted. To this end, we utilize a Convolutional Neural Network (CNN) to extract features from the speech signal, while for the visual modality we use a deep residual network (ResNet) of 50 layers. In addition to the importance of feature extraction, the machine learning algorithm also needs to be insensitive to outliers while being able to model the context. To tackle this problem, Long Short-Term Memory (LSTM) networks are utilized. The system is then trained in an end-to-end fashion in which, by also taking advantage of the correlations between the two streams, we manage to significantly outperform traditional approaches based on auditory and visual handcrafted features in predicting spontaneous and natural emotions on the RECOLA database of the AVEC 2016 research challenge on emotion recognition.

  16. Comparing visual representations across human fMRI and computational vision

    PubMed Central

    Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.

    2013-01-01

    Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
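The searchlight comparison in this record relies on representational dissimilarity analysis: build a dissimilarity matrix over stimuli for each neural sphere and each model, then correlate the matrices. A simplified plain-Python sketch (the RSA literature typically uses correlation distance and a rank correlation for the second-order comparison; Pearson is used throughout here for brevity, and the toy response patterns are invented):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between the
    response patterns (voxels or model features) of every stimulus pair."""
    n = len(patterns)
    return [[0.0 if i == j else 1 - pearson(patterns[i], patterns[j])
             for j in range(n)] for i in range(n)]

def rdm_similarity(rdm_a, rdm_b):
    """Second-order comparison: correlate the upper triangles of two RDMs."""
    tri = lambda m: [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]
    return pearson(tri(rdm_a), tri(rdm_b))

# Three hypothetical stimuli; patterns 0 and 1 are nearly identical in
# shape, pattern 2 is anticorrelated with both.
neural = [[1.0, 2.0, 3.0, 4.0],
          [2.0, 4.1, 5.9, 8.0],
          [4.0, 3.0, 2.0, 1.0]]
neural_rdm = rdm(neural)
```

A model whose feature-space RDM yields a high `rdm_similarity` with the neural RDM, as SIFT did in the ventral pathway here, is said to account for that region's stimulus representation.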

  17. Electrophysiological Correlates of Familiarity and Recollection in Associative Recognition: Contributions of Perceptual and Conceptual Processing to Unitization

    PubMed Central

    Li, Bingcan; Mao, Xinrui; Wang, Yujuan; Guo, Chunyan

    2017-01-01

It is generally accepted that associative recognition memory is supported by recollection. In addition, recent research indicates that familiarity can support associative memory, especially when two items are unitized into a single item. Both perceptual and conceptual manipulations can be used to unitize items, but few studies have compared these two methods of unitization directly. In the present study, we investigated the effects of familiarity and recollection on successful retrieval of items that were unitized perceptually or conceptually. Participants were instructed to remember either a Chinese two-character compound or unrelated word-pairs, which were presented simultaneously or sequentially. Participants were then asked to recognize whether word-pairs were intact or rearranged. Event-related potential (ERP) recordings were performed during the recognition phase of the study. Two-character compounds were better discriminated than unrelated word-pairs, and simultaneous presentation was found to elicit better discrimination than sequential presentation for unrelated word-pairs only. ERP recordings indicated that the early intact/rearranged effects (FN400), typically associated with familiarity, were elicited in compound word-pairs with both simultaneous and sequential presentation, and in simultaneously presented unrelated word-pairs, but not in sequentially presented unrelated word-pairs. In contrast, the late positive complex (LPC) effects associated with recollection were elicited in all four conditions. Together, these results indicate that while the engagement of familiarity in associative recognition is affected by both perceptual and conceptual unitization, conceptual unitization promotes a higher level of unitization (LOU). In addition, the engagement of recollection was not affected by the unitization manipulations. It should be noted, however, that due to the experimental design, the effects presented here may be due to semantic rather than episodic memory, and future studies should take this into consideration when manipulating rearranged pairs. PMID:28400723

  18. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers, based on their estimated expertise, to label the data. We apply the proposed model to four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show that our extended model not only maintains high accuracy but also achieves higher efficiency. PMID:26924892
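The entropy-based active selection mentioned in this record can be sketched compactly: query the unlabeled sample whose predictive distribution has maximum Shannon entropy, i.e., the one the current classifier is least certain about. The pool of samples and the posterior probabilities below are hypothetical:

```python
from math import log2

def prediction_entropy(p):
    """Shannon entropy (in bits) of a predictive distribution;
    higher entropy means a less certain prediction."""
    return -sum(q * log2(q) for q in p if q > 0)

def select_next(unlabeled):
    """Active selection: return the id of the sample whose current
    class-probability vector has the highest predictive entropy."""
    return max(unlabeled, key=lambda kv: prediction_entropy(kv[1]))[0]

# Hypothetical posterior class probabilities for three unlabeled images:
# img_b is nearly a coin flip, so it is the most informative to label.
pool = {"img_a": [0.95, 0.05], "img_b": [0.55, 0.45], "img_c": [0.80, 0.20]}
chosen = select_next(pool.items())
```

The paper pairs this sample-selection criterion with labeler selection by estimated expertise; only the sample side is sketched here.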

  19. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.

    PubMed

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-06-01

Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, built a matrix whose entries represent the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with visual methods: Cytoscape (based on BIND) and Gene Ontology (based on molecular function); R software version 3.2 and Bioconductor were used to perform these analyses. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some correlation-coefficient results did not agree with the visualization, possibly because of the small number of data points.
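The two-step construction described in this record (score all gene pairs, then keep pairs exceeding a threshold) can be sketched as follows. The gene names, expression profiles, and threshold below are illustrative, not the study's data:

```python
from math import sqrt
from itertools import combinations

def pearson(x, y):
    """Parametric score: Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    """Nonparametric score: Pearson correlation of the ranks
    (ties get the minimum rank in this simplified version)."""
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

def coexpression_edges(expr, score, threshold):
    """Step 1: score every gene pair; step 2: connect the pairs
    whose absolute score exceeds the threshold."""
    return sorted((g1, g2)
                  for (g1, x), (g2, y) in combinations(expr.items(), 2)
                  if abs(score(x, y)) > threshold)

# Hypothetical expression profiles for three genes: F2 tracks F5
# closely, while MTHFR has an unrelated profile.
genes = {"F5":    [1.0, 2.1, 3.0, 4.2],
         "F2":    [2.0, 4.0, 6.1, 8.5],
         "MTHFR": [5.0, 1.0, 4.0, 2.0]}
edges = coexpression_edges(genes, pearson, threshold=0.9)
```

Swapping `pearson` for `spearman` (or any pairwise score) reuses the same two-step pipeline, which is how the study compares the three coefficients.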

  20. Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia

    ERIC Educational Resources Information Center

    Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.

    2003-01-01

    The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…

  1. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
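The sparsity-exploiting trick mentioned in this record (updating only a small subset of elements in a large weight matrix each iteration) can be illustrated with an outer-product update restricted to nonzero activations. This is a schematic of the idea, not the authors' exact learning rule:

```python
def sparse_outer_update(W, pre, post, lr):
    """Apply an outer-product weight update W[i][j] += lr * pre[i] * post[j],
    but only where both activations are nonzero. With sparse codes this
    touches a tiny fraction of a large matrix."""
    nz_pre = [i for i, a in enumerate(pre) if a != 0]
    nz_post = [j for j, b in enumerate(post) if b != 0]
    for i in nz_pre:
        for j in nz_post:
            W[i][j] += lr * pre[i] * post[j]
    return W

# Toy sparse layer activations: only one presynaptic and two
# postsynaptic units are active, so only 2 of 9 weights change.
W = [[0.0] * 3 for _ in range(3)]
pre = [0.0, 2.0, 0.0]
post = [1.0, 0.0, 3.0]
sparse_outer_update(W, pre, post, lr=0.5)
```

The cost of the update scales with the product of the active-unit counts rather than the full matrix size, which is what makes learning in large networks tractable here.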

  2. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

Reliable detection of ordinary facial expressions (e.g. smiles) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  3. Infant Visual Recognition Memory: Independent Contributions of Speed and Attention.

    ERIC Educational Resources Information Center

    Rose, Susan A.; Feldman, Judith F.; Jankowski, Jeffery J.

    2003-01-01

    Examined contributions of cognitive processing speed, short-term memory capacity, and attention to infant visual recognition memory. Found that infants who showed better attention and faster processing had better recognition memory. Contributions of attention and processing speed were independent of one another and similar at all ages studied--5,…

  4. Double Dissociation of Pharmacologically Induced Deficits in Visual Recognition and Visual Discrimination Learning

    ERIC Educational Resources Information Center

    Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer

    2008-01-01

    Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by…

  5. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  6. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
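Random-effects pooling of correlation effect sizes, as used in this record, is conventionally done on Fisher-z transformed values with a DerSimonian-Laird estimate of the between-study variance. A sketch under that assumption (the study counts and r values below are invented, not the meta-analysis data):

```python
from math import atanh, tanh

def pooled_effect(studies):
    """Random-effects pooling of correlation effect sizes.
    studies: list of (r, n). Fisher-z transform each r (variance 1/(n-3)),
    estimate between-study variance tau^2 (DerSimonian-Laird), then take
    the inverse-variance weighted mean and back-transform to r."""
    zs = [atanh(r) for r, n in studies]
    vs = [1 / (n - 3) for _, n in studies]
    # fixed-effect quantities feed the tau^2 estimate
    w = [1 / v for v in vs]
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)
    # random-effects weights add tau^2 to each within-study variance
    w_re = [1 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return tanh(z_re)

# Hypothetical happy-face recognition advantages from three studies
effect = pooled_effect([(0.30, 80), (0.45, 120), (0.25, 60)])
```

The pooled population-level estimate always lies within the range of the individual study effects, with larger studies pulling it more strongly.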

  7. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  8. The Processing of Visual and Phonological Configurations of Chinese One- and Two-Character Words in a Priming Task of Semantic Categorization.

    PubMed

    Ma, Bosen; Wang, Xiaoyun; Li, Degao

    2015-01-01

    To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.

  9. Metal-mediated DNA base pairing: alternatives to hydrogen-bonded Watson-Crick base pairs.

    PubMed

    Takezawa, Yusuke; Shionoya, Mitsuhiko

    2012-12-18

With its capacity to store and transfer the genetic information within a sequence of monomers, DNA plays its central role in chemical evolution through replication and amplification. This elegant behavior is largely based on highly specific molecular recognition between nucleobases through the specific hydrogen bonds in the Watson-Crick base pairing system. While the native base pairs have become amazingly sophisticated through the long history of evolution, synthetic chemists have devoted considerable efforts to create alternative base pairing systems in recent decades. Most of these new systems were designed based on the shape complementarity of the pairs or the rearrangement of hydrogen-bonding patterns. We wondered whether metal coordination could serve as an alternative driving force for DNA base pairing and why hydrogen bonding was selected on Earth in the course of molecular evolution. Therefore, we envisioned an alternative design strategy: we replaced hydrogen bonding with another important scheme in biological systems, metal-coordination bonding. In this Account, we provide an overview of the chemistry of metal-mediated base pairing including basic concepts, molecular design, characteristic structures and properties, and possible applications of DNA-based molecular systems. We describe several examples of artificial metal-mediated base pairs, such as the Cu(2+)-mediated hydroxypyridone base pair, H-Cu(2+)-H (where H denotes a hydroxypyridone-bearing nucleoside), developed by us and other researchers. To design the metallo-base pairs we carefully chose appropriate combinations of ligand-bearing nucleosides and metal ions. As expected from their stronger bonding through metal coordination, DNA duplexes possessing metallo-base pairs exhibited higher thermal stability than natural hydrogen-bonded DNAs. Furthermore, we could also use metal-mediated base pairs to construct or induce other high-order structures. These features could lead to metal-responsive functional DNA molecules such as artificial DNAzymes and DNA machines. In addition, the metallo-base pairing system is a powerful tool for the construction of homogeneous and heterogeneous metal arrays, which can lead to DNA-based nanomaterials such as electronic wires and magnetic devices. Recently researchers have investigated these systems as enzyme replacements, which may offer an additional contribution to chemical biology and synthetic biology through the expansion of the genetic alphabet.

  10. The development of object recognition memory in rhesus macaques with neonatal lesions of the perirhinal cortex.

    PubMed

    Zeamer, Alyson; Richardson, Rebecca L; Weiss, Alison R; Bachevalier, Jocelyne

    2015-02-01

    To investigate the role of the perirhinal cortex in the development of recognition memory as measured by the visual paired-comparison (VPC) task, infant monkeys with neonatal perirhinal lesions and sham-operated controls were tested at 1.5, 6, 18, and 48 months of age on the VPC task with color stimuli and intermixed delays of 10 s, 30 s, 60 s, and 120 s. Monkeys with neonatal perirhinal lesions showed an increase in novelty preference between 1.5 and 6 months of age similar to controls, although at these two ages their performance remained significantly poorer than that of control animals. With age, performance in animals with neonatal perirhinal lesions deteriorated relative to that of controls. In contrast to the lack of novelty preference in monkeys with perirhinal lesions acquired in adulthood, novelty preference in the neonatally operated animals remained above chance at all delays and all ages. The data suggest that, although incidental recognition memory processes can be supported by the perirhinal cortex in early infancy, other temporal cortical areas may support these processes in the absence of a functional perirhinal cortex early in development. The neural substrates mediating incidental recognition memory processes thus appear to be more widespread in early infancy than in adulthood. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
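
The novelty-preference measure behind the VPC task reduces to a simple proportion of looking time: recognition is inferred when looking at the novel stimulus exceeds the 50% chance level. A minimal sketch (the timing values are invented, not taken from the study):

```python
# Hedged illustration of the novelty-preference score used in visual
# paired-comparison (VPC) tasks. Looking times are illustrative only.

def novelty_preference(novel_ms, familiar_ms):
    """Percent of total looking time spent on the novel stimulus."""
    total = novel_ms + familiar_ms
    if total == 0:
        raise ValueError("no looking time recorded")
    return 100.0 * novel_ms / total

# Example: 3.2 s on the novel image vs. 1.8 s on the familiar one.
score = novelty_preference(3200, 1800)
print(f"novelty preference = {score:.1f}% (chance = 50%)")  # 64.0%
```

Scores reliably above 50% across delays are what "remained above chance" refers to in the abstract.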

  11. Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM

    PubMed Central

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining tissue origin. Computer-assisted diagnosis of prostatic calculi has promising potential but remains understudied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Lumina were extracted from prostate histology images using local entropy and Otsu thresholding; calculi were then recognized with PCA-SVM based on their texture features. The SVM classifier showed an average run time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can easily recognize the concentric structure and visualized features. This method is therefore effective for the automated recognition of prostatic calculi. PMID:21461364
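
As a rough sketch of the kind of PCA-SVM pipeline the abstract describes (not the authors' implementation: the synthetic 20-D "texture features", the number of components, and the hinge-loss subgradient solver are all assumptions):

```python
# Hedged sketch: PCA for dimensionality reduction followed by a linear SVM
# trained with hinge-loss subgradient descent. Data are synthetic stand-ins
# for texture feature vectors; hyperparameters are illustrative.
import numpy as np

def pca_fit(X, k):
    """Return (mean, components) for a k-dimensional PCA via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def svm_train(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via hinge-loss subgradient descent; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                               # only shrink w
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
# Two synthetic "texture feature" clusters in 20-D (calculus vs. non-calculus).
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
y = np.array([-1] * 50 + [1] * 50)

mu, comps = pca_fit(X, k=5)                    # reduce 20-D -> 5-D
Z = (X - mu) @ comps.T
w, b = svm_train(Z, y)
acc = np.mean(np.sign(Z @ w + b) == y)
print(f"training accuracy: {acc:.2%}")
```

The projection step keeps the SVM small and fast, which is consistent with the sub-second classification time the abstract reports.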

  12. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  13. Cultural differences in visual object recognition in 3-year-old children

    PubMed Central

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. Three tasks (n = 128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than by U.S. children. The pattern suggests early cross-cultural differences in visual processing; these findings raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  14. Cultural differences in visual object recognition in 3-year-old children.

    PubMed

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.

    PubMed

    Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W

    2006-08-01

    Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.

  16. A Thiazole Coumarin (TC) Turn-On Fluorescence Probe for AT-Base Pair Detection and Multipurpose Applications in Different Biological Systems

    NASA Astrophysics Data System (ADS)

    Narayanaswamy, Nagarjun; Kumar, Manoj; Das, Sadhan; Sharma, Rahul; Samanta, Pralok K.; Pati, Swapan K.; Dhar, Suman K.; Kundu, Tapas K.; Govindaraju, T.

    2014-09-01

    Sequence-specific recognition of DNA by small turn-on fluorescence probes is a promising tool for bioimaging, bioanalytical and biomedical applications. Here, the authors report a novel cell-permeable and red fluorescent hemicyanine-based thiazole coumarin (TC) probe for DNA recognition, nuclear staining and cell cycle analysis. TC exhibited strong fluorescence enhancement in the presence of DNA containing AT-base pairs, but did not fluoresce with GC sequences, single-stranded DNA, RNA and proteins. The fluorescence staining of HeLa S3 and HEK 293 cells by TC followed by DNase and RNase digestion studies depicted the selective staining of DNA in the nucleus over the cytoplasmic region. Fluorescence-activated cell sorting (FACS) analysis by flow cytometry demonstrated the potential application of TC in cell cycle analysis in HEK 293 cells. Metaphase chromosome and malaria parasite DNA imaging studies further confirmed the in vivo diagnostic and therapeutic applications of probe TC. Probe TC may find multiple applications in fluorescence spectroscopy, diagnostics, bioimaging and molecular and cell biology.

  17. A Thiazole Coumarin (TC) Turn-On Fluorescence Probe for AT-Base Pair Detection and Multipurpose Applications in Different Biological Systems

    PubMed Central

    Narayanaswamy, Nagarjun; Kumar, Manoj; Das, Sadhan; Sharma, Rahul; Samanta, Pralok K.; Pati, Swapan K.; Dhar, Suman K.; Kundu, Tapas K.; Govindaraju, T.

    2014-01-01

    Sequence-specific recognition of DNA by small turn-on fluorescence probes is a promising tool for bioimaging, bioanalytical and biomedical applications. Here, the authors report a novel cell-permeable and red fluorescent hemicyanine-based thiazole coumarin (TC) probe for DNA recognition, nuclear staining and cell cycle analysis. TC exhibited strong fluorescence enhancement in the presence of DNA containing AT-base pairs, but did not fluoresce with GC sequences, single-stranded DNA, RNA and proteins. The fluorescence staining of HeLa S3 and HEK 293 cells by TC followed by DNase and RNase digestion studies depicted the selective staining of DNA in the nucleus over the cytoplasmic region. Fluorescence-activated cell sorting (FACS) analysis by flow cytometry demonstrated the potential application of TC in cell cycle analysis in HEK 293 cells. Metaphase chromosome and malaria parasite DNA imaging studies further confirmed the in vivo diagnostic and therapeutic applications of probe TC. Probe TC may find multiple applications in fluorescence spectroscopy, diagnostics, bioimaging and molecular and cell biology. PMID:25252596

  18. Strength-based criterion shifts in recognition memory.

    PubMed

    Singer, Murray

    2009-10-01

    In manipulations of stimulus strength between lists, a more lenient signal detection criterion is more frequently applied to a weak than to a strong stimulus class. However, with randomly intermixed weak and strong test probes, such a criterion shift often does not result. A procedure that has yielded delay-based within-list criterion shifts was applied to strength manipulations in recognition memory for categorized word lists. When participants made semantic ratings about each stimulus word, strength-based criterion shifts emerged regardless of whether words from pairs of categories were studied in separate blocks (Experiment 1) or in intermixed blocks (Experiment 2). In Experiment 3, the criterion shift persisted under the semantic-rating study task, but not under rote memorization. These findings suggest that continually adjusting the recognition decision criterion is cognitively feasible. They provide a technique for manipulating the criterion shift, and they identify competing theoretical accounts of these effects.
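
The criterion shifts discussed here are usually quantified with signal detection theory. A minimal sketch, with invented hit and false-alarm rates, of how sensitivity d' and criterion c are computed (a more lenient criterion shows up as a negative c):

```python
# Hedged illustration of signal detection measures behind a "criterion
# shift". Hit/false-alarm rates below are made up for demonstration.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

strong = dprime_and_criterion(0.90, 0.10)  # strong list: neutral criterion
weak = dprime_and_criterion(0.85, 0.35)    # weak list: lenient (c < 0)
print(f"strong: d'={strong[0]:.2f}, c={strong[1]:.2f}")
print(f"weak:   d'={weak[0]:.2f}, c={weak[1]:.2f}")
```

Under these illustrative rates, the weak class yields both lower sensitivity and a negative (lenient) criterion, the pattern the between-list manipulations typically show.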

  19. Mechanism Underlying the Nucleobase-Distinguishing Ability of Benzopyridopyrimidine (BPP).

    PubMed

    Kochman, Michał A; Bil, Andrzej; Miller, R J Dwayne

    2017-11-02

    Benzopyridopyrimidine (BPP) is a fluorescent nucleobase analogue capable of forming base pairs with adenine (A) and guanine (G) at different sites. When incorporated into oligodeoxynucleotides, it is capable of differentiating between the two purine nucleobases by virtue of the fact that its fluorescence is largely quenched when it is base-paired to guanine, whereas base-pairing to adenine causes only a slight reduction of the fluorescence quantum yield. In the present article, the photophysics of BPP is investigated through computer simulations. BPP is found to be a good charge acceptor, as demonstrated by its positive and appreciably large electron affinity. The selective quenching process is attributed to charge transfer (CT) from the purine nucleobase, which is predicted to be efficient in the BPP-G base pair, but essentially inoperative in the BPP-A base pair. The CT process owes its high selectivity to a combination of two factors: the ionization potential of guanine is lower than that of adenine, and less obviously, the site occupied by guanine enables a greater stabilization of the CT state through electrostatic interactions than the one occupied by adenine. The case of BPP illustrates that molecular recognition via hydrogen bonding can enhance the selectivity of photoinduced CT processes.

  20. Cotinine improves visual recognition memory and decreases cortical Tau phosphorylation in the Tg6799 mice.

    PubMed

    Grizzell, J Alex; Patel, Sagar; Barreto, George E; Echeverria, Valentina

    2017-08-01

    Alzheimer's disease (AD) is associated with the progressive aggregation of hyperphosphorylated forms of the microtubule-associated protein Tau in the central nervous system. Cotinine, the main metabolite of nicotine, reduced working memory deficits, synaptic loss, and amyloid β peptide aggregation into oligomers and plaques, and inhibited the cerebral Tau kinase glycogen synthase kinase 3β (GSK3β), in the transgenic (Tg)6799 (5XFAD) mice. In this study, the effects of cotinine on visual recognition memory and on cortical Tau phosphorylation at the GSK3β sites Serine (Ser)-396/Ser-404, as well as on phospho-CREB, were investigated in Tg6799 and non-transgenic (NT) littermate mice. Tg mice showed short-term visual recognition memory impairment in the novel object recognition test and higher levels of Tau phosphorylation than NT mice. Cotinine significantly improved visual recognition memory performance, increased CREB phosphorylation, and reduced cortical Tau phosphorylation. Potential mechanisms underlying these beneficial effects are discussed. Copyright © 2017. Published by Elsevier Inc.

  1. Toward a Unified Theory of Visual Area V4

    PubMed Central

    Roe, Anna W.; Chelazzi, Leonardo; Connor, Charles E.; Conway, Bevil R.; Fujita, Ichiro; Gallant, Jack L.; Lu, Haidong; Vanduffel, Wim

    2016-01-01

    Visual area V4 is a midtier cortical area in the ventral visual pathway. It is crucial for visual object recognition and has been a focus of many studies on visual attention. However, there is no unifying view of V4’s role in visual processing. Neither is there an understanding of how its role in feature processing interfaces with its role in visual attention. This review captures our current knowledge of V4, largely derived from electrophysiological and imaging studies in the macaque monkey. Based on recent discovery of functionally specific domains in V4, we propose that the unifying function of V4 circuitry is to enable selective extraction of specific functional domain-based networks, whether it be by bottom-up specification of object features or by top-down attentionally driven selection. PMID:22500626

  2. Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning.

    PubMed

    Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei

    2016-10-01

    Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework that mimics the active and dynamic learning and recognition process of the primate visual cortex. From the point of view of principles, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the point of view of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are then selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects with learning ability.

  3. Acuity of a Cryptochrome and Vision-Based Magnetoreception System in Birds

    PubMed Central

    Solov'yov, Ilia A.; Mouritsen, Henrik; Schulten, Klaus

    2010-01-01

    Abstract The magnetic compass of birds is embedded in the visual system and it has been hypothesized that the primary sensory mechanism is based on a radical pair reaction. Previous models of magnetoreception have assumed that the radical pair-forming molecules are rigidly fixed in space, and this assumption has been a major objection to the suggested hypothesis. In this article, we investigate theoretically how much disorder is permitted for the radical pair-forming, protein-based magnetic compass in the eye to remain functional. Our study shows that only one rotational degree of freedom of the radical pair-forming protein needs to be partially constrained, while the other two rotational degrees of freedom do not impact the magnetoreceptive properties of the protein. The result implies that any membrane-associated protein is sufficiently restricted in its motion to function as a radical pair-based magnetoreceptor. We relate our theoretical findings to the cryptochromes, currently considered the likeliest candidate to furnish radical pair-based magnetoreception. PMID:20655831

  4. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which require heavy computational costs and memory. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts; while dimensionality reduction methods can mitigate the high-dimensional problem. In this paper, we propose a building recognition scheme which integrates the RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
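
The relevance-feedback step can be illustrated with the classic Rocchio update from information retrieval (a hedged sketch, not the authors' formulation; the feature vectors and weights below are invented):

```python
# Hedged sketch of Rocchio-style relevance feedback: the query vector is
# pulled toward features the user marked relevant and pushed away from
# non-relevant ones. Weights alpha/beta/gamma are conventional defaults.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Refine a query vector from user feedback (Rocchio, 1971)."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

query = np.array([0.2, 0.8, 0.1])                       # initial low-level features
relevant = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]) # user-marked relevant
nonrelevant = np.array([[0.0, 0.9, 0.9]])               # user-marked non-relevant

refined = rocchio(query, relevant, nonrelevant)
print(refined)  # shifted toward the relevant cluster
```

Iterating this update is one standard way RF "bridges the gap" between low-level features and the user's high-level concept.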

  5. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. Visual stimuli are compared with acoustic and visual-acoustic stimuli. Following this, methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are presented, along with the concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors to be used in stuttering therapy.

  6. Verifying visual properties in sentence verification facilitates picture recognition memory.

    PubMed

    Pecher, Diane; Zanolie, Kiki; Zeelenberg, René

    2007-01-01

    According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.

  7. Measuring the Speed of Newborn Object Recognition in Controlled Visual Worlds

    ERIC Educational Resources Information Center

    Wood, Justin N.; Wood, Samantha M. W.

    2017-01-01

    How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks…

  8. Visual Object Detection, Categorization, and Identification Tasks Are Associated with Different Time Courses and Sensitivities

    ERIC Educational Resources Information Center

    de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros

    2011-01-01

    Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…

  9. THE EFFECT OF WORD ASSOCIATIONS ON THE RECOGNITION OF FLASHED WORDS.

    ERIC Educational Resources Information Center

    SAMUELS, S. JAY

    The hypothesis that when associated pairs of words are presented, speed of recognition will be faster than when nonassociated word pairs are presented or when a target word is presented by itself was tested. Twenty university students, initially screened for vision, were assigned randomly to rows of a 5 x 5 repeated-measures Latin square design.…

  10. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.

  11. The Role of Anterior Nuclei of the Thalamus: A Subcortical Gate in Memory Processing: An Intracerebral Recording Study

    PubMed Central

    Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan

    2015-01-01

    Objective To study the involvement of the anterior nuclei of the thalamus (ANT) as compared to the involvement of the hippocampus in the processes of encoding and recognition during visual and verbal memory tasks. Methods We studied intracerebral recordings in patients with pharmacoresistent epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. Results P300-like potentials were recorded in the hippocampus by visual and verbal memory encoding and recognition tasks and in the ANT by the visual encoding and visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. Conclusions The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures. PMID:26529407

  12. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    PubMed Central

    Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence

    2015-01-01

    Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces or voices) and multimodal (faces and voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, lower scores for the neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli; (2) developmental age was significantly associated with emotion recognition in TD children, whereas in children with ASD this was the case only for the multimodal task; (3) language impairments tended to be associated with the emotion recognition scores of children with ASD in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928

  13. An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.

    PubMed

    Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei

    2018-02-01

    In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition, which learns robust classifiers that generalize well to an arbitrary target domain based on training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, recognition performance can be further improved by exploiting the relations among the multiple types of features. To address the first issue, considering that fusing multiple SVM classifiers has been shown to enhance domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers are learnt, each trained on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to be grouped into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method that adds a co-regularizer to enforce consistent cluster structures of the ESVM classifiers on different views, based on the consensus principle. Inspired by multiple kernel learning, we also propose an EMVDG_MK method that fuses the ESVM classifiers from different views based on the complementary principle. 
In addition, we further extend our EMVDG framework to exemplar-based multi-view domain adaptation (EMVDA) framework when the unlabeled target domain data are available during the training procedure. The effectiveness of our EMVDG and EMVDA frameworks for visual recognition is clearly demonstrated by comprehensive experiments on three benchmark data sets.
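
A much-simplified sketch of the exemplar-classifier idea underlying EMVDG: one linear scorer per positive training sample, trained against the shared negative pool, with predictions fused across the ensemble. A true ESVM solves a regularized SVM per exemplar; here each scorer is just the direction from the negative mean to its exemplar (a cheap linear stand-in), and all data are synthetic:

```python
# Hedged sketch of per-exemplar linear scorers with max-fusion. This is a
# crude approximation of exemplar SVMs, not the EMVDG implementation.
import numpy as np

def exemplar_scorers(positives, negatives):
    """One linear scorer w per positive exemplar: w = x_pos - mean(neg)."""
    mu_neg = negatives.mean(axis=0)
    return [x - mu_neg for x in positives]

def ensemble_score(scorers, x):
    """Fuse per-exemplar responses by taking the maximum score."""
    return max(float(w @ x) for w in scorers)

rng = np.random.default_rng(1)
pos = rng.normal(2.0, 0.5, (5, 8))    # positive exemplars (8-D features)
neg = rng.normal(-2.0, 0.5, (40, 8))  # shared negative pool

scorers = exemplar_scorers(pos, neg)
print(ensemble_score(scorers, rng.normal(2.0, 0.5, 8)) >
      ensemble_score(scorers, rng.normal(-2.0, 0.5, 8)))  # True
```

In the multi-view setting, one such ensemble would be learnt per feature type and the ensembles then coupled (consensus) or fused (complementary), as the abstract describes.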

  14. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  15. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  16. View Combination: A Generalization Mechanism for Visual Recognition

    ERIC Educational Resources Information Center

    Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric

    2011-01-01

    We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…

  17. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  18. Scene recognition based on integrating active learning with dictionary learning

    NASA Astrophysics Data System (ADS)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance, but labeling images manually is time consuming and often unrealistic in practice. To obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as its classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness, so as to effectively select useful unlabeled samples from a given sample set and expand the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
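A minimal sketch of an uncertainty-plus-representativeness sampling criterion of the kind this abstract describes; the margin-based uncertainty score, the exponential similarity used for representativeness, and the weighting alpha are illustrative assumptions, since the paper's exact formulas are not given here.

```python
import numpy as np

def select_queries(X_pool, w, b, k=2, alpha=0.5):
    """Rank unlabeled samples by a combined active-learning criterion:
    uncertainty (small |decision value| of the current classifier) and
    representativeness (average similarity to the rest of the pool)."""
    margin = np.abs(X_pool @ w + b)
    uncertainty = 1.0 / (1.0 + margin)                 # near boundary -> high
    dists = np.linalg.norm(X_pool[:, None] - X_pool[None, :], axis=2)
    representativeness = np.exp(-dists).mean(axis=1)   # dense region -> high
    score = alpha * uncertainty + (1 - alpha) * representativeness
    return np.argsort(score)[::-1][:k]                 # indices of top-k

# Classifier (w, b) separates x0 < 0 from x0 > 0; two points sit near the
# boundary and close together, two sit far away and isolated.
X_pool = np.array([[0.05, 0.0], [3.0, 0.0], [-3.0, 0.0], [0.1, 0.2]])
picked = select_queries(X_pool, np.array([1.0, 0.0]), 0.0, k=2)
print(sorted(int(i) for i in picked))   # the two near-boundary points
```

The selected indices would be sent to an annotator, and the newly labeled samples appended to the DPL training set.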

  19. DYNAMIC PATTERN RECOGNITION BY MEANS OF THRESHOLD NETS,

    DTIC Science & Technology

    A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond…

  20. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns, as well as the bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life.

  1. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First layer feature units operate on groups of pixels, while subsequent layer feature units operate on the output of lower level feature units. We report results on a previously published 36 class character recognition task and a four class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven class moving face recognition task, achieving 79 percent accuracy.
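A minimal sketch of the time-surface idea, assuming a simple exponential decay over each pixel's most recent event timestamp; the grid size and decay constant tau here are arbitrary illustrations, not the paper's parameters.

```python
import numpy as np

def time_surface(events, t_now, shape=(8, 8), tau=50.0):
    """Build a time-surface: each pixel carries exp(-(t_now - t_last)/tau),
    where t_last is the timestamp of the most recent event at that pixel.
    events: iterable of (x, y, t) tuples with t <= t_now."""
    last = np.full(shape, -np.inf)
    for x, y, t in events:
        last[y, x] = max(last[y, x], t)    # keep the most recent event
    surface = np.exp(-(t_now - last) / tau)
    surface[np.isinf(last)] = 0.0          # pixels that never fired
    return surface

events = [(1, 1, 10.0), (1, 1, 90.0), (5, 5, 20.0)]
S = time_surface(events, t_now=100.0)
print(S[1, 1] > S[5, 5])   # fresher activity has decayed less
```

In the full hierarchy, such surfaces are computed per event over a local neighborhood and fed as features to the next layer.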

  2. Development of visuo-haptic transfer for object recognition in typical preschool and school-aged children.

    PubMed

    Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca

    2018-07-01

    Object recognition is a long and complex adaptive process, and its full maturation requires the combination of many different sensory experiences, as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and subsequently learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition capacity using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested with a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for the visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects and, although not fully mature, are significant for adaptive behavior from the first years of life. The study of typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.

  3. [Visual hemifield differences in recognition of kanji and hiragana and its relation to hemispheric cerebral asymmetries].

    PubMed

    Miyazaki, T; Sugimoto, Y; Sato, H

    1990-07-01

    Visual hemifield differences in recognition of kanji and hiragana were studied in forty male right-handers. A letter of kanji or hiragana was presented unilaterally to the right or left visual hemifield on a CRT display for 123 msec. A hundred and twenty recognition trials were performed for each subject using 20 well-acquainted kanji, 20 unfamiliar kanji and 20 hiragana. Kanji was more accurately recognized in the left visual hemifield than in the right hemifield. This tendency was more prominent for unfamiliar kanji than for well-acquainted kanji. There were no visual hemifield differences in recognition of hiragana. Learning effects were observed in the right hemifield for kanji and in both hemifields for hiragana. The results are discussed in relation to cerebral asymmetries of function. Kanji might be processed in the right cerebral hemisphere as geometric forms. The results on hiragana may be explained by mental set. It is suggested that the modes of processing may differ between kanji and hiragana.

  4. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  5. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss.

    PubMed

    Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly

    2017-08-16

    This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.

  6. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers and support discrimination of human body motion.

  7. Robust and Effective Component-based Banknote Recognition for the Blind

    PubMed Central

    Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi

    2012-01-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes for assisting visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: it handles a variety of currency designs and bills in various conditions; 3) high efficiency: it recognizes banknotes quickly; and 4) ease of use: it helps blind users aim at the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system has also been tested by blind users. PMID:22661884
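The spatial-relationship check over matched SURF features is not specified in detail in the abstract; the toy stand-in below assumes one simple form of it, namely that pairwise distances between matched keypoints in the template and in the image should agree up to a single global scale. The function name, threshold, and vote fraction are all hypothetical.

```python
import numpy as np

def bill_present(template_pts, image_pts, tol=0.2, min_votes=0.8):
    """Decide whether matched keypoints are spatially consistent with the
    template layout: pairwise distance ratios should cluster around one
    global scale factor. template_pts[i] is matched to image_pts[i]."""
    t = np.linalg.norm(template_pts[:, None] - template_pts[None, :], axis=2)
    m = np.linalg.norm(image_pts[:, None] - image_pts[None, :], axis=2)
    iu = np.triu_indices(len(template_pts), k=1)   # each pair once
    ratios = m[iu] / t[iu]
    scale = np.median(ratios)                      # robust scale estimate
    votes = np.abs(ratios - scale) < tol * scale   # pairs agreeing with it
    return votes.mean() >= min_votes

tpl = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
good = tpl * 1.5 + 10.0          # same layout, scaled and shifted
bad = np.array([[0.0, 0.0], [5.0, 1.0], [1.0, 4.0], [9.0, 9.0]])
print(bill_present(tpl, good), bill_present(tpl, bad))
```

Matches failing such a consistency test would be treated as background clutter rather than a detected bill.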

  8. A Motion-Based Feature for Event-Based Pattern Recognition

    PubMed Central

    Clady, Xavier; Maro, Jean-Matthieu; Barré, Sébastien; Benosman, Ryad B.

    2017-01-01

    This paper introduces an event-based, luminance-free feature computed from the output of asynchronous event-based neuromorphic retinas. The feature consists of mapping into a matrix the distribution of the optical flow along the contours of the moving objects in the visual scene. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating “spiking” events that encode relative changes in the pixel's illumination at high temporal resolution. The optical flow is computed at each event and is integrated locally or globally in a grid based on a speed and direction coordinate frame, using speed-tuned temporal kernels. The latter ensure that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition. PMID:28101001

  9. General Approach for Rock Classification Based on Digital Image Analysis of Electrical Borehole Wall Images

    NASA Astrophysics Data System (ADS)

    Linek, M.; Jungmann, M.; Berlage, T.; Clauser, C.

    2005-12-01

    Within the Ocean Drilling Program (ODP), image logging tools such as the Formation MicroScanner (FMS) or the Resistivity-At-Bit (RAB) tool have been routinely deployed. Both logging methods are based on resistivity measurements at the borehole wall and are therefore sensitive to conductivity contrasts, which are mapped in color-scale images. These images are commonly used to study the structure of the sedimentary rocks and the oceanic crust (petrologic fabric, fractures, veins, etc.). So far, mapping of lithology from electrical images has been based purely on visual inspection and subjective interpretation. We apply digital image analysis to electrical borehole wall images in order to develop a method that augments objective rock identification. We focus on supervised textural pattern recognition, which studies the spatial gray level distribution with respect to certain rock types. FMS image intervals of rock classes known from core data are taken in order to train the textural characteristics of each class. A so-called gray level co-occurrence matrix is computed by counting the occurrences of pairs of gray levels that are a certain distance apart. Once the matrix for an image interval is computed, we calculate the image contrast, homogeneity, energy, and entropy. We assign characteristic textural features to different rock types by reducing the image information to a small set of descriptive features. Once a discriminating set of texture features for each rock type is found, we are able to classify entire FMS images with respect to the trained rock types. A rock classification based on texture features enables quantitative lithology mapping and offers high repeatability, in contrast to purely visual, subjective image interpretation. We show examples of rock classification between breccias, pillows, massive units, and horizontally bedded tuffs based on ODP image data.
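The co-occurrence computation described above can be sketched as follows, assuming a horizontal offset of one pixel and a small number of gray levels; the toy textures are illustrative stand-ins, not FMS data.

```python
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dx, dy), plus the four
    descriptors named in the abstract: contrast, homogeneity, energy,
    entropy. img must contain integer gray levels in [0, levels)."""
    h, w = img.shape
    glcm = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                          # joint probabilities
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return contrast, homogeneity, energy, entropy

smooth = np.zeros((8, 8), dtype=int)               # uniform "massive unit"
stripes = np.tile([0, 3], (8, 4))                  # alternating "bedding"
print(glcm_features(smooth)[0] < glcm_features(stripes)[0])  # lower contrast
```

Feature vectors like these, computed per image interval, are what a supervised classifier would be trained on for lithology mapping.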

  10. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration.

  11. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    PubMed

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

    Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices worn on both arms, consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of the single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing robust and wearable EMG-ACC-based SLR systems.
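The reported robustness to bad training samples comes from bagging and majority voting; a toy sketch with simple decision stumps (standing in for the paper's improved decision trees, which the abstract does not specify) shows the effect on deliberately mislabeled 1-D data.

```python
import numpy as np

def fit_stump(X, y):
    """Best threshold stump on 1-D data: predict `sign` above the
    threshold and `-sign` below, maximizing training accuracy."""
    best = (0.0, 1, -1.0)                        # (threshold, sign, accuracy)
    for thr in X:
        for sign in (1, -1):
            pred = np.where(X > thr, sign, -sign)
            acc = (pred == y).mean()
            if acc > best[2]:
                best = (thr, sign, acc)
    return best[0], best[1]

def forest_predict(stumps, X):
    """Majority vote over the ensemble (odd size -> no ties)."""
    votes = sum(np.where(X > thr, sign, -sign) for thr, sign in stumps)
    return np.sign(votes)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.where(X > 0, 1, -1)                            # true concept: sign(x)
y_noisy = y * rng.choice([1, -1], 200, p=[0.7, 0.3])  # 30% flipped labels

stumps = []
for _ in range(25):                                   # bagging on noisy data
    idx = rng.integers(0, 200, 200)                   # bootstrap resample
    stumps.append(fit_stump(X[idx], y_noisy[idx]))

acc = (forest_predict(stumps, X) == y).mean()
print(acc)   # high despite the 30% label noise
```

Each individual stump is pulled around by the flipped labels, but the bootstrap-averaged vote recovers the underlying concept, which is the mechanism behind the 10.67% vs. 27.5% error rates reported above.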

  12. Visual and Visuospatial Short-Term Memory in Mild Cognitive Impairment and Alzheimer Disease: Role of Attention

    ERIC Educational Resources Information Center

    Alescio-Lautier, B.; Michel, B. F.; Herrera, C.; Elahmadi, A.; Chambon, C.; Touzet, C.; Paban, V.

    2007-01-01

    It has been proposed that visual recognition memory and certain attentional mechanisms are impaired early in Alzheimer disease (AD). Little is known about visuospatial recognition memory in AD. The crucial role of the hippocampus on spatial memory and its damage in AD suggest that visuospatial recognition memory may also be impaired early. The aim…

  13. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  14. A Pilot Study of a Test for Visual Recognition Memory in Adults with Moderate to Severe Intellectual Disability

    ERIC Educational Resources Information Center

    Pyo, Geunyeong; Ala, Tom; Kyrouac, Gregory A.; Verhulst, Steven J.

    2010-01-01

    Objective assessment of memory functioning is an important part of evaluation for Dementia of Alzheimer Type (DAT). The revised Picture Recognition Memory Test (r-PRMT) is a test for visual recognition memory to assess memory functioning of persons with intellectual disabilities (ID), specifically targeting moderate to severe ID. A pilot study was…

  15. Twin hydroxymethyluracil-A base pair steps define the binding site for the DNA-binding protein TF1.

    PubMed

    Grove, A; Figueiredo, M L; Galeone, A; Mayol, L; Geiduschek, E P

    1997-05-16

    The DNA-bending protein TF1 is the Bacillus subtilis bacteriophage SPO1-encoded homolog of the bacterial HU proteins and the Escherichia coli integration host factor. We recently proposed that TF1, which binds with high affinity (Kd was approximately 3 nM) to preferred sites within the hydroxymethyluracil (hmU)-containing phage genome, identifies its binding sites based on sequence-dependent DNA flexibility. Here, we show that two hmU-A base pair steps coinciding with two previously proposed sites of DNA distortion are critical for complex formation. The affinity of TF1 is reduced 10-fold when both of these hmU-A base pair steps are replaced with A-hmU, G-C, or C-G steps; only modest changes in affinity result when substitutions are made at other base pairs of the TF1 binding site. Replacement of all hmU residues with thymine decreases the affinity of TF1 greatly; remarkably, the high affinity is restored when the two hmU-A base pair steps corresponding to previously suggested sites of distortion are reintroduced into otherwise T-containing DNA. T-DNA constructs with 3-base bulges spaced apart by 9 base pairs of duplex also generate nM affinity of TF1. We suggest that twin hmU-A base pair steps located at the proposed sites of distortion are key to target site selection by TF1 and that recognition is based largely, if not entirely, on sequence-dependent DNA flexibility.

  16. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  17. Structural basis for the recognition of guide RNA and target DNA heteroduplex by Argonaute

    PubMed Central

    Miyoshi, Tomohiro; Ito, Kosuke; Murakami, Ryo; Uchiumi, Toshio

    2016-01-01

    Argonaute proteins are key players in the gene silencing mechanisms mediated by small nucleic acids in all domains of life from bacteria to eukaryotes. However, little is known about how Argonaute proteins recognize a guide RNA/target DNA heteroduplex. Here, we determine the 2 Å crystal structure of Rhodobacter sphaeroides Argonaute (RsAgo) in a complex with an 18-nucleotide guide RNA and its complementary target DNA. The heteroduplex maintains Watson–Crick base-pairing even in the 3′-region of the guide RNA between the N-terminal and PIWI domains, suggesting a recognition mode by RsAgo for stable interaction with the target strand. In addition, the MID/PIWI interface of RsAgo has a system that specifically recognizes the 5′ base-U of the guide RNA, and the duplex-recognition loop of the PAZ domain is important for the DNA silencing activity. Furthermore, we show that Argonaute discriminates the nucleic acid type (RNA/DNA) by recognition of the duplex structure of the seed region. PMID:27325485

  19. A practical approach for writer-dependent symbol recognition using a writer-independent symbol recognizer.

    PubMed

    LaViola, Joseph J; Zeleznik, Robert C

    2007-11-01

    We present a practical technique for using a writer-independent recognition engine to improve the accuracy and speed while reducing the training requirements of a writer-dependent symbol recognizer. Our writer-dependent recognizer uses a set of binary classifiers based on the AdaBoost learning algorithm, one for each possible pairwise symbol comparison. Each classifier consists of a set of weak learners, one of which is based on a writer-independent handwriting recognizer. During online recognition, we also use the n-best list of the writer-independent recognizer to prune the set of possible symbols and thus reduce the number of required binary classifications. In this paper, we describe the geometric and statistical features used in our recognizer and our all-pairs classification algorithm. We also present the results of experiments that quantify the effect incorporating a writer-independent recognition engine into a writer-dependent recognizer has on accuracy, speed, and user training time.
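
    The all-pairs voting strategy with n-best pruning described above lends itself to a compact sketch. Here the per-pair AdaBoost ensembles are abstracted as plain callables; the function name and dictionary layout are assumptions for illustration, not the authors' API.

```python
from itertools import combinations

def classify(features, pairwise, nbest):
    """Vote among binary classifiers, restricted to the writer-independent
    recognizer's n-best symbols so only O(|nbest|^2) comparisons run.
    pairwise[(a, b)] (with a < b) returns whichever of a, b it prefers."""
    votes = {s: 0 for s in nbest}
    for a, b in combinations(sorted(nbest), 2):
        votes[pairwise[(a, b)](features)] += 1
    return max(votes, key=votes.get)      # symbol winning most pairwise duels
```

    With a full symbol alphabet, the pruning step is what keeps recognition fast: only pairs drawn from the n-best list are ever compared.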

  20. Long-term visual outcomes in extremely low-birth-weight children (an American Ophthalmological Society thesis).

    PubMed

    Spencer, Rand

    2006-01-01

    The goal is to analyze the long-term visual outcome of extremely low-birth-weight children. This is a retrospective analysis of eyes of extremely low-birth-weight children on whom vision testing was performed. Visual outcomes were studied by analyzing acuity outcomes at ≥36 months of adjusted age, correlating early acuity testing with final visual outcome and evaluating adverse risk factors for vision. Data from 278 eyes are included. Mean birth weight was 731 g, and mean gestational age at birth was 26 weeks. 248 eyes had grating acuity outcomes measured at 73 ± 36 months, and 183 eyes had recognition acuity testing at 76 ± 39 months. 54% had below-normal grating acuities, and 66% had below-normal recognition acuities. 27% of grating outcomes and 17% of recognition outcomes were ≥3 years of age. A slower-than-normal rate of early visual development was predictive of abnormal grating acuity (P < .0001) and abnormal recognition acuity (P < .0001) at ≥3 years of age. Eyes diagnosed with maximal retinopathy of prematurity in zone I had lower acuity outcomes (P = .0002) than did those with maximal retinopathy of prematurity in zone II/III. Eyes of children born at 28 weeks gestational age. Eyes of children with poorer general health after premature birth had a 5.3 times greater risk of abnormal recognition acuity. Long-term visual development in extremely low-birth-weight infants is problematic and associated with a high risk of subnormal acuity. Early acuity testing is useful in identifying children at greatest risk for long-term visual abnormalities. Gestational age at birth of

  1. When apperceptive agnosia is explained by a deficit of primary visual processing.

    PubMed

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

    Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented with visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read, or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Two-Way Gene Interaction From Microarray Data Based on Correlation Methods

    PubMed Central

    Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh

    2016-01-01

    Background Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. Objectives The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a Gene Co-expression Network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. Materials and Methods In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman’s rank correlation coefficient and Blomqvist’s measure, and compared them with Pearson’s correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson’s correlation, Spearman’s rank correlation, and Blomqvist’s coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and Gene Ontology, based on molecular function visual methods; R software version 3.2 and Bioconductor were used to perform these methods. Results Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist’s coefficient was not confirmed by the visual methods. Conclusions Some of the correlation-coefficient results do not agree with the visualization methods, possibly because of the small amount of data. PMID:27621916
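
    The two-step construction in the abstract (score all gene pairs with a correlation coefficient, then connect pairs whose score exceeds a threshold) can be sketched directly. The 0.8 cutoff and the tie-free rank trick for Spearman are illustrative assumptions, not values from the study.

```python
import numpy as np

def coexpression_network(expr, threshold=0.8, method="pearson"):
    """Step 1: score every gene pair with a correlation coefficient.
    Step 2: connect pairs whose absolute score exceeds the threshold.
    Rows of `expr` are genes, columns are samples."""
    if method == "spearman":
        # Spearman = Pearson on ranks (tie handling omitted in this sketch)
        expr = np.argsort(np.argsort(expr, axis=1), axis=1).astype(float)
    corr = np.corrcoef(expr)
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)          # no self-edges
    return corr, adj
```

    The returned boolean matrix is the two-way network: an edge between genes i and j exists exactly when `adj[i, j]` is true, and the matrix is symmetric by construction.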

  3. When is the right hemisphere holistic and when is it not? The case of Chinese character recognition.

    PubMed

    Chung, Harry K S; Leung, Jacklyn C Y; Wong, Vienne M Y; Hsiao, Janet H

    2018-05-15

    Holistic processing (HP) has long been considered a characteristic of right hemisphere (RH) processing. Indeed, holistic face processing is typically associated with left visual field (LVF)/RH processing advantages. Nevertheless, expert Chinese character recognition involves reduced HP and increased RH lateralization, presenting a counterexample. Recent modeling research suggests that RH processing may be associated with an increase or decrease in HP, depending on whether spacing or component information is used. Since expert Chinese character recognition involves increasing sensitivity to components while deemphasizing spacing information, RH processing in experts may be associated with weaker HP than in novices. Consistent with this hypothesis, in a divided visual field paradigm, novices exhibited HP only in the LVF/RH, whereas experts showed no HP in either visual field. This result suggests that the RH may flexibly switch between part-based and holistic representations, consistent with recent fMRI findings. The RH's advantage in global/low spatial frequency processing is suggested to be relative to the task-relevant frequency range. Thus, its use of holistic and part-based representations may depend on how attention is allocated for task-relevant information. This study provides the first behavioral evidence showing how the type of information used for processing modulates perceptual representations in the RH. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. The influence of writing practice on letter recognition in preschool children: a comparison between handwriting and typing.

    PubMed

    Longcamp, Marieke; Zerbato-Poudou, Marie-Thérèse; Velay, Jean-Luc

    2005-05-01

    A large body of data supports the view that movement plays a crucial role in letter representation and suggests that handwriting contributes to the visual recognition of letters. If so, changing the motor conditions while children are learning to write, by using a method based on typing instead of handwriting, should affect their subsequent letter recognition performances. In order to test this hypothesis, we trained two groups of 38 children (aged 3-5 years) to copy letters of the alphabet either by hand or by typing them. After three weeks of learning, we ran two recognition tests, one week apart, to compare the letter recognition performances of the two groups. The results showed that in the older children, handwriting training gave rise to better letter recognition than typing training.

  5. Hippocampal Contribution to Implicit Configuration Memory Expressed via Eye Movements During Scene Exploration

    PubMed Central

    Ryals, Anthony J.; Wang, Jane X.; Polnaszek, Kelly L.; Voss, Joel L.

    2015-01-01

    Although the hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. PMID:25620526
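
    Exploration overlap (EO), the viewing of the same scene locations at both study and test, can be approximated by binning fixations into a coarse grid and taking the overlap of visited cells. The grid size and the Jaccard measure below are assumptions for illustration; the paper's exact EO definition may differ.

```python
def exploration_overlap(study_fix, test_fix, grid=(5, 5)):
    """Jaccard overlap of coarsely binned fixation locations between
    study and test viewing of a scene; fixations are (x, y) pairs in
    [0, 1) normalized image coordinates."""
    def cells(fixations):
        return {(int(x * grid[0]), int(y * grid[1])) for x, y in fixations}
    a, b = cells(study_fix), cells(test_fix)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

    A score near 1 means the viewer revisited essentially the same regions at test; near 0 means the exploration paths shared no regions.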

  6. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    PubMed

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
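
    The core claim, that spatial pooling of orientation-tuned populations produces "compulsory averaging", can be reproduced with a toy population code. The tuning width, the doubled-angle decoder, and the equal-weight pooling are all modelling assumptions for illustration, not the authors' parameterization.

```python
import numpy as np

def population_response(theta, pref, sigma=15.0):
    # Gaussian tuning over orientation (degrees, circular with period 180)
    d = (theta - pref + 90.0) % 180.0 - 90.0
    return np.exp(-d**2 / (2 * sigma**2))

def decode(resp, pref):
    # population-vector readout on the doubled-angle circle
    v = np.sum(resp * np.exp(1j * np.deg2rad(2 * pref)))
    return (np.rad2deg(np.angle(v)) / 2) % 180.0

pref = np.arange(0.0, 180.0, 1.0)          # preferred orientations of the pool
target, flanker = 80.0, 100.0
pooled = population_response(target, pref) + population_response(flanker, pref)
print(decode(pooled, pref))                # ≈ 90: compulsory averaging
```

    Pooling the target and flanker populations yields a decoded orientation midway between the two, which is exactly the averaging behavior the abstract attributes to crowding.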

  7. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained from two different-size robots with different movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, the results demonstrate the effectiveness of our method for forming categories of objects whose appearance changes.
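
    As an illustration of the visual-word stage of such a pipeline, a toy self-organizing map can cluster local descriptors so that each map unit acts as one visual word. This stands in for the SOM step only; the SIFT, OC-SVM, ART-2, and CPN stages are omitted, and all hyperparameters are assumptions.

```python
import numpy as np

def train_som(descriptors, grid=(4, 4), epochs=20, lr=0.5, radius=1.5, seed=0):
    """Toy SOM: each of the grid's units becomes one 'visual word'."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h * w, descriptors.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for e in range(epochs):
        decay = 1.0 - e / epochs                     # linear learning-rate decay
        for x in rng.permutation(descriptors):
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            dist = np.linalg.norm(coords - coords[bmu], axis=1)
            nb = np.exp(-dist**2 / (2 * radius**2))  # neighbourhood kernel
            weights += lr * decay * nb[:, None] * (x - weights)
    return weights

def visual_word(weights, x):
    # index of the best-matching unit = the descriptor's visual word
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

    After training, quantizing each image's descriptors through `visual_word` yields the bag-of-words style representation that the later labeling and categorization stages consume.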

  8. PNA containing isocytidine nucleobase: synthesis and recognition of double helical RNA

    PubMed Central

    Zengeya, Thomas; Li, Ming; Rozners, Eriks

    2011-01-01

    A peptide nucleic acid (PNA1) containing a 5-methylisocytidine (iC) nucleobase has been synthesized. Triple helix formation between PNA1 and RNA hairpins having variable base pairs interacting with iC was studied using isothermal titration calorimetry. The iC nucleobase recognized the proposed target, a C-G inversion in the polypurine tract of RNA, with slightly higher affinity than the natural nucleobases, though the sequence selectivity of recognition was low. Compared to non-modified PNA, PNA1 had lower affinity for its RNA target. PMID:21333533

  9. Global RNA Fold and Molecular Recognition for a pfl Riboswitch Bound to ZMP, a Master Regulator of One-Carbon Metabolism

    DOE PAGES

    Ren, Aiming; Rajashankar, Kanagalaghatta R.; Patel, Dinshaw J.

    2015-06-25

    ZTP, the pyrophosphorylated analog of ZMP (5-amino-4-imidazole carboxamide ribose-5'-monophosphate), was identified as an alarmone that senses 10-formyl-tetrahydrofolate deficiency in bacteria. Recently, a pfl riboswitch was identified that selectively binds ZMP and regulates genes associated with purine biosynthesis and one-carbon metabolism. Here we report on the structure of the ZMP-bound Thermosinus carboxydivorans pfl riboswitch sensing domain, thereby defining the pseudoknot-based tertiary RNA fold, the binding-pocket architecture, and principles underlying ligand recognition specificity. Molecular recognition involves shape complementarity, with the ZMP 5-amino and carboxamide groups paired with the Watson-Crick edge of an invariant uracil, and the imidazole ring sandwiched between guanines, while the sugar hydroxyls form intermolecular hydrogen bond contacts. The burial of the ZMP base and ribose moieties, together with unanticipated coordination of the carboxamide by Mg2+, contrasts with exposure of the 5'-phosphate to solvent. Lastly, our studies highlight the principles underlying RNA-based recognition of ZMP, a master regulator of one-carbon metabolism.

  11. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  12. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  13. Evolution of I-SceI Homing Endonucleases with Increased DNA Recognition Site Specificity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Rakesh; Ho, Kwok Ki; Tenney, Kristen

    2013-09-18

    Elucidating how homing endonucleases undergo changes in recognition site specificity will facilitate efforts to engineer proteins for gene therapy applications. I-SceI is a monomeric homing endonuclease that recognizes and cleaves within an 18-bp target. It tolerates limited degeneracy in its target sequence, including substitution of a C:G+4 base pair for the wild-type A:T+4 base pair. Libraries encoding randomized amino acids at I-SceI residue positions that contact or are proximal to A:T+4 were used in conjunction with a bacterial one-hybrid system to select I-SceI derivatives that bind to recognition sites containing either the A:T+4 or the C:G+4 base pairs. As expected, isolates encoding wild-type residues at the randomized positions were selected using either target sequence. All I-SceI proteins isolated using the C:G+4 recognition site included small side-chain substitutions at G100 and either contained (K86R/G100T, K86R/G100S and K86R/G100C) or lacked (G100A, G100T) a K86R substitution. Interestingly, the binding affinities of the selected variants for the wild-type A:T+4 target are 4- to 11-fold lower than that of wild-type I-SceI, whereas those for the C:G+4 target are similar. The increased specificity of the mutant proteins is also evident in binding experiments in vivo. These differences in binding affinities account for the observed ~36-fold difference in target preference between the K86R/G100T and wild-type proteins in DNA cleavage assays. An X-ray crystal structure of the K86R/G100T mutant protein bound to a DNA duplex containing the C:G+4 substitution suggests how sequence specificity of a homing enzyme can increase. This biochemical and structural analysis defines one pathway by which site specificity is augmented for a homing endonuclease.

  14. Local visual perception bias in children with high-functioning autism spectrum disorders; do we have the whole picture?

    PubMed

    Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn

    2016-01-01

    Local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions, but also in a superior ability to disembed figures; however, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test, and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.

  15. Visual Associative Learning in Restrained Honey Bees with Intact Antennae

    PubMed Central

    Dobrin, Scott E.; Fahrbach, Susan E.

    2012-01-01

    A restrained honey bee can be trained to extend its proboscis in response to the pairing of an odor with a sucrose reward, a form of olfactory associative learning referred to as the proboscis extension response (PER). Although the ability of flying honey bees to respond to visual cues is well-established, associative visual learning in restrained honey bees has been challenging to demonstrate. Those few groups that have documented vision-based PER have reported that removing the antennae prior to training is a prerequisite for learning. Here we report, for a simple visual learning task, the first successful performance by restrained honey bees with intact antennae. Honey bee foragers were trained on a differential visual association task by pairing the presentation of a blue light with a sucrose reward and leaving the presentation of a green light unrewarded. A negative correlation was found between age of foragers and their performance in the visual PER task. Using the adaptations to the traditional PER task outlined here, future studies can exploit pharmacological and physiological techniques to explore the neural circuit basis of visual learning in the honey bee. PMID:22701575

  16. Visual encoding impairment in patients with schizophrenia: contribution of reduced working memory span, decreased processing speed, and affective symptoms.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Huerta-Ramos, Elena; Ochoa, Susana; Usall, Judith; Abellán-Vega, Helena; Roca, Mercedes; Haro, Josep Maria

    2015-01-01

    Previous research has revealed the contribution of decreased processing speed and reduced working memory span in verbal and visual memory impairment in patients with schizophrenia. The role of affective symptoms in verbal memory has also emerged in a few studies. The authors designed a picture recognition task to investigate the impact of these factors on visual encoding. Two types of pictures (black and white vs. colored) were presented under 2 different conditions of context encoding (either displayed at a specific location or in association with another visual stimulus). It was assumed that the process of encoding associated pictures was more effortful than that of encoding pictures that were presented alone. Working memory span and processing speed were assessed. In the patient group, working memory span was significantly associated with the recognition of the associated pictures but not significantly with that of the other pictures. Controlling for processing speed eliminated the patients' deficit in the recognition of the colored pictures and greatly reduced their deficit in the recognition of the black-and-white pictures. The recognition of the black-and-white pictures was inversely related to anxiety in men and to depression in women. Working memory span constrains the effortful visual encoding processes in patients, whereas processing speed decrement accounts for most of their visual encoding deficit. Affective symptoms also have an impact on visual encoding, albeit differently in men and women. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  17. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  18. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step toward discovering this mapping: we identified visual patterns related to AMD that seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the locations of the automatically detected drusen. These features characterize the texture, shape, color, spatial distribution, and amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions of several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.
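    The pattern-by-pattern zygosity test described above can be sketched in a few lines. This is a hypothetical illustration under stated assumptions (scalar feature values, a simple absolute-difference distance, an arbitrary threshold); the function names and toy data are not from the authors' implementation.

```python
# Hypothetical sketch: classify twin pairs as monozygotic (MZ) or dizygotic (DZ)
# from the distance between their drusen feature values. Smaller distances are
# expected for MZ twins when a feature is under genetic control.

def predict_zygosity(feature_a, feature_b, threshold):
    """Predict 'MZ' if the feature distance is below the threshold, else 'DZ'."""
    return "MZ" if abs(feature_a - feature_b) < threshold else "DZ"

def accuracy(pairs, threshold):
    """Fraction of twin pairs whose zygosity is predicted correctly.

    `pairs` is a list of (feature_a, feature_b, true_zygosity) tuples.
    """
    correct = sum(
        predict_zygosity(a, b, threshold) == truth for a, b, truth in pairs
    )
    return correct / len(pairs)

# Toy data: MZ pairs have similar feature values, DZ pairs differ more.
pairs = [(0.80, 0.82, "MZ"), (0.40, 0.43, "MZ"),
         (0.90, 0.55, "DZ"), (0.20, 0.60, "DZ")]
print(accuracy(pairs, threshold=0.1))  # -> 1.0
```

    A feature that yields high accuracy under this kind of test would, on the study's logic, be a candidate for genetic control.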

  19. Base pairing and base mis-pairing in nucleic acids

    NASA Technical Reports Server (NTRS)

    Wang, A. H. J.; Rich, A.

    1986-01-01

    In recent years we have learned that DNA is conformationally active. It can exist in a number of different stable conformations, including both right-handed and left-handed forms. Using single-crystal X-ray diffraction analysis we are able to discover not only additional conformations of the nucleic acids but also different types of hydrogen-bonded base-base interactions. Although Watson-Crick base pairings are the predominant type of interaction in double-helical DNA, they are not the only types. Recently, we have been able to examine mismatching of guanine-thymine base pairs in left-handed Z-DNA at atomic resolution (1 Å). A minimum amount of distortion of the sugar-phosphate backbone is found in the G x T pairing, in which the bases are held together by two hydrogen bonds in the wobble pairing interaction. Because of the high resolution of the analysis, we can visualize water molecules that fill in to accommodate the hydrogen-bonding positions in the bases that are not used in the base-base interactions. Studies on other DNA oligomers have revealed that other types of non-Watson-Crick hydrogen-bonding interactions can occur. In the structure of a DNA octamer with the sequence d(GCGTACGC) complexed with the antibiotic triostin A, it was found that the two central AT base pairs are held together by Hoogsteen rather than Watson-Crick base pairing. Similarly, the G x C base pairs at the ends also show Hoogsteen rather than Watson-Crick pairing. Hoogsteen base pairs make a modified helix that is distinct from the Watson-Crick double helix.

  20. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, sizes, spacings, and resolutions. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black-to-white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
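    The ratio-reconstruction test on slides of known composition can be illustrated with a minimal sketch; the label strings and the `estimate_ratio` helper are hypothetical illustrations, not part of ARLO.

```python
# Hypothetical sketch of the ratio-reconstruction step: given per-grain binary
# predictions for one slide, estimate the proportion of black spruce pollen.
# The estimate can then be compared against the slide's known mixing ratio.

def estimate_ratio(predictions):
    """Return the fraction of grains classified as 'black' on a slide."""
    black = sum(1 for p in predictions if p == "black")
    return black / len(predictions)

# A slide prepared with a known 50:50 mixture might yield predictions like:
preds = ["black", "white", "black", "white",
         "black", "white", "white", "black"]
print(estimate_ratio(preds))  # -> 0.5
```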

  1. Sensory Contributions to Impaired Emotion Processing in Schizophrenia

    PubMed Central

    Butler, Pamela D.; Abeles, Ilana Y.; Weiskopf, Nicole G.; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E.; Zemon, Vance; Loughead, James; Gur, Ruben C.; Javitt, Daniel C.

    2009-01-01

    Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective. PMID:19793797

  3. Flexible conceptual combination: Electrophysiological correlates and consequences for associative memory

    PubMed Central

    Lucas, Heather D.; Hubbard, Ryan J.; Federmeier, Kara D.

    2017-01-01

    When meaningful stimuli such as words are encountered in groups or pairs (e.g., “elephant-ferry”), they can be processed either separately or as an integrated concept (“an elephant ferry”). Prior research suggests that memory for integrated associations is supported by different mechanisms than is memory for nonintegrated associations. However, little is known about the neurocognitive mechanisms that support the integration of novel stimulus pairs. We recorded ERPs while participants memorized sequentially presented, unrelated noun pairs using a strategy that either did or did not involve attempting to construct coherent definitions. We varied the concreteness of the first noun in each pair to examine whether conceptual combination instructions would induce compositional concreteness effects, or differences in ERPs evoked by the second noun as a function of the concreteness of the first noun. We found that the conceptual combination task, but not the noncombinatory encoding task, produced compositional concreteness effects on a late frontal negativity previously linked to visual imagery. Moreover, word pairs studied under conceptual combination instructions showed evidence of more unitized or holistic memory representations on associative recognition and free recall tests. Finally, item analyses indicated that (a) items with higher normed imageability ratings were rated by participants as easier to conceptually combine, and (b) in the conceptual combination task, ease-of-combination ratings mediated an indirect relationship between imageability and subsequent associative memory. These data are suggestive of a role of compositional imagery in the online formation of novel concepts via conceptual combination. PMID:28191647

  4. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  5. Improving Protein Fold Recognition by Deep Learning Networks.

    PubMed

    Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin

    2015-12-04

    For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict whether a given query-template protein pair belongs to the same structural fold. The input features stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold is 84.5%, 61.5%, and 33.6% for Top 1 predictions and 91.2%, 76.5%, and 60.7% for Top 5 predictions at the family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed results comparable to ensemble DN-Fold at the family and superfamily levels. Finally, we extended the binary classification problem of fold recognition to a real-value regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.
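    The Top 1 / Top 5 recognition rates quoted above follow the usual top-k evaluation scheme: templates are ranked by predicted same-fold score, and a query counts as recognized if a correct template appears in the top k. The sketch below is a generic illustration of that metric with hypothetical names and toy data, not DN-Fold's actual evaluation code.

```python
# Generic top-k recognition rate: fraction of queries for which at least one
# correct template appears among the k highest-ranked templates.

def top_k_rate(ranked_results, k):
    """`ranked_results` maps each query to a list of booleans marking whether
    each template is correct, ordered from the highest-scoring template down."""
    hits = sum(1 for flags in ranked_results.values() if any(flags[:k]))
    return hits / len(ranked_results)

ranked = {
    "q1": [True, False, False],   # correct template ranked first
    "q2": [False, False, True],   # correct template only at rank 3
    "q3": [False, False, False],  # no correct template retrieved
}
print(top_k_rate(ranked, k=1))  # only q1 counts
print(top_k_rate(ranked, k=5))  # q1 and q2 count
```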

  6. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expressions in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and that this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Mechanisms of object recognition: what we have learned from pigeons

    PubMed Central

    Soto, Fabian A.; Wasserman, Edward A.

    2014-01-01

    Behavioral studies of object recognition in pigeons have been conducted for 50 years, yielding a large body of data. Recent work has been directed toward synthesizing this evidence and understanding the visual, associative, and cognitive mechanisms that are involved. The outcome is that pigeons are likely to be the non-primate species for which the computational mechanisms of object recognition are best understood. Here, we review this research and suggest that a core set of mechanisms for object recognition might be present in all vertebrates, including pigeons and people, making pigeons an excellent candidate model to study the neural mechanisms of object recognition. Behavioral and computational evidence suggests that error-driven learning participates in object category learning by pigeons and people, and recent neuroscientific research suggests that the basal ganglia, which are homologous in these species, may implement error-driven learning of stimulus-response associations. Furthermore, learning of abstract category representations can be observed in pigeons and other vertebrates. Finally, there is evidence that feedforward visual processing, a central mechanism in models of object recognition in the primate ventral stream, plays a role in object recognition by pigeons. We also highlight differences between pigeons and people in object recognition abilities, and propose candidate adaptive specializations which may explain them, such as holistic face processing and rule-based category learning in primates. From a modern comparative perspective, such specializations are to be expected regardless of the model species under study. The fact that we have a good idea of which aspects of object recognition differ in people and pigeons should be seen as an advantage over other animal models. From this perspective, we suggest that there is much to learn about human object recognition from studying the “simple” brains of pigeons. PMID:25352784

  8. Effects of Emotion on Associative Recognition: Valence and Retention Interval Matter

    PubMed Central

    Pierce, Benton H.; Kensinger, Elizabeth A.

    2011-01-01

    In two experiments, we examined the effects of emotional valence and arousal on associative binding. Participants studied negative, positive, and neutral word pairs, followed by an associative recognition test. In Experiment 1, with a short-delayed test, accuracy for intact pairs was equivalent across valences, whereas accuracy for rearranged pairs was lower for negative than for positive and neutral pairs. In Experiment 2, we tested participants after a one-week delay and found that accuracy was greater for intact negative than for intact neutral pairs, whereas rearranged pair accuracy was equivalent across valences. These results suggest that, although negative emotional valence impairs associative binding after a short delay, it may improve binding after a longer delay. The results also suggest that valence, as well as arousal, needs to be considered when examining the effects of emotion on associative memory. PMID:21401233

  9. Affective and contextual values modulate spatial frequency use in object recognition

    PubMed Central

    Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno

    2014-01-01

    Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514

  10. Visual body recognition in a prosopagnosic patient.

    PubMed

    Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M

    2012-01-01

    Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  12. Flight Deck-Based Delegated Separation: Evaluation of an On-Board Interval Management System with Synthetic and Enhanced Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.; Norman, Robert M.; Ellis, Kyle K. E.; Barmore, Bryan E.

    2011-01-01

    An emerging Next Generation Air Transportation System concept - Equivalent Visual Operations (EVO) - can be achieved by using electronic means to provide sufficient visibility of the external world, along with other required flight references, on flight deck displays that enable the safety, operational tempos, and visual flight rules (VFR)-like procedures in all weather conditions. Synthetic and enhanced flight vision system technologies are critical enabling technologies for EVO. The current research evaluated concepts for flight deck-based interval management (FIM) operations, integrated with Synthetic Vision and Enhanced Vision flight-deck displays and technologies. One concept involves delegated flight deck-based separation, in which the flight crews were paired with another aircraft and were responsible for spacing and maintaining separation from the paired aircraft, termed "equivalent visual separation." The operation required the flight crews to acquire and maintain "equivalent visual contact" as well as to conduct manual landings in low-visibility conditions. The paper describes results that evaluated the concept of EVO delegated separation, including an off-nominal scenario in which the lead aircraft was not able to conform to the assigned spacing, resulting in a loss of separation.

  13. Sketching for Military Courses of Action Diagrams

    DTIC Science & Technology

    2003-01-01

    the glyph bar and (optionally) spoken input. Avoiding the need for recognition in glyphs: Glyphs in nuSketch systems have two parts. The ink is the ... time-stamped collection of ink strokes that comprise the base-level visual representation of the glyph. The content of the glyph is an entity in ... preferred having a neat symbol drawn where they wanted it. Those who had tried ink recognition systems particularly appreciated never having to

  14. Surface versus Edge-Based Determinants of Visual Recognition.

    ERIC Educational Resources Information Center

    Biederman, Irving; Ju, Ginny

    1988-01-01

    The latency at which objects could be identified by 126 subjects was compared for line drawings (edge-based depiction) and color photographs (surface depiction). The line drawings were identified about as quickly as the photographs; primal access to a mental representation of an object can thus be modeled from an edge-based description. (SLD)

  15. Cross-modal individual recognition in wild African lions.

    PubMed

    Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen

    2016-08-01

    Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).

  16. Aging and solid shape recognition: Vision and haptics.

    PubMed

    Norman, J Farley; Cheeseman, Jacob R; Adkins, Olivia C; Cox, Andrea G; Rogers, Connor E; Dowell, Catherine J; Baxter, Michael W; Norman, Hideko F; Reyes, Cecia M

    2015-10-01

    The ability of 114 younger and older adults to recognize naturally-shaped objects was evaluated in three experiments. The participants viewed or haptically explored six randomly-chosen bell peppers (Capsicum annuum) in a study session and were later required to judge whether each of twelve bell peppers was "old" (previously presented during the study session) or "new" (not presented during the study session). When recognition memory was tested immediately after study, the younger adults' (Experiment 1) performance for vision and haptics was identical when the individual study objects were presented once. Vision became superior to haptics, however, when the individual study objects were presented multiple times. When 10- and 20-min delays (Experiment 2) were inserted in between study and test sessions, no significant differences occurred between vision and haptics: recognition performance in both modalities was comparable. When the recognition performance of older adults was evaluated (Experiment 3), a negative effect of age was found for visual shape recognition (younger adults' overall recognition performance was 60% higher). There was no age effect, however, for haptic shape recognition. The results of the present experiments indicate that the visual recognition of natural object shape is different from haptic recognition in multiple ways: visual shape recognition can be superior to that of haptics and is affected by aging, while haptic shape recognition is less accurate and unaffected by aging. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Hybrid Speaker Recognition Using Universal Acoustic Model

    NASA Astrophysics Data System (ADS)

    Nishimura, Jun; Kuroda, Tadahiro

    We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of the input speech signals among the nodes. However, there are often synchronization errors among the nodes, which degrade speaker recognition performance. By focusing on properties of the speaker's acoustic channel, the UAM can provide robustness against these synchronization errors. The overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1-s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved at synchronization errors of less than 100 ms.
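    A minimal sketch of the energy-based baseline and its combination with a UAM-style score, assuming a simple weighted sum; the weighting scheme, node names, and signal values are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: the node whose recorded frame has the highest energy is
# taken to be worn by the current speaker; a UAM-style channel score is mixed
# in via a weighted sum (alpha is an assumed, arbitrary weight).

def frame_energy(samples):
    """Mean squared amplitude of one frame of audio samples."""
    return sum(s * s for s in samples) / len(samples)

def detect_speaker(node_frames, uam_scores, alpha=0.5):
    """Pick the node maximizing a weighted mix of frame energy and UAM score."""
    def score(node):
        return alpha * frame_energy(node_frames[node]) + (1 - alpha) * uam_scores[node]
    return max(node_frames, key=score)

frames = {"node_a": [0.9, -0.8, 0.7], "node_b": [0.1, -0.2, 0.1]}
uam = {"node_a": 0.8, "node_b": 0.3}
print(detect_speaker(frames, uam))  # -> node_a
```

    Because the UAM term depends on the acoustic channel rather than on frame timing, it is the component that lends robustness when frames are misaligned across nodes.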

  18. Transformations in the Recognition of Visual Forms

    ERIC Educational Resources Information Center

    Charness, Neil; Bregman, Albert S.

    1973-01-01

    In a study which required college students to learn to recognize four flexible plastic shapes photographed on different backgrounds from different angles, the importance of a context-rich environment for the learning and recognition of visual patterns was illustrated. (Author)

  19. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be described simply as a feedforward, hierarchical simulation of the ventral stream of visual cortex using a biologically plausible, computationally convenient spiking neural network (SNN) system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and from developments in artificial spiking neural networks. By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in rich, dynamic, and complex environments, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391

  1. Intact anger recognition in depression despite aberrant visual facial information usage.

    PubMed

    Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M

    2014-08-01

    Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes in major depression. However, the literature is unclear on a number of important factors, including whether these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Multitasking During Degraded Speech Recognition in School-Age Children

    PubMed Central

    Grieco-Calub, Tina M.; Ward, Kristina M.; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890
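    The degraded-speech conditions in this study use noise-band vocoding. As an illustration of the general technique (not the authors' processing chain; the band edges, envelope cutoff, and FFT-masking filters below are all simplifying assumptions), a crude N-channel noise vocoder can be sketched as:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, env_cutoff=50.0):
    """Crude noise-band vocoder: divide the spectrum into n_channels
    logarithmically spaced bands, extract each band's amplitude envelope,
    and use it to modulate band-limited noise."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Log-spaced band edges between 100 Hz and 8 kHz (typical vocoder range).
    edges = np.logspace(np.log10(100.0), np.log10(8000.0), n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(n)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        # Band-pass the signal and the noise carrier via FFT masking.
        band = np.fft.irfft(np.fft.rfft(signal) * band_mask, n)
        carrier = np.fft.irfft(np.fft.rfft(noise) * band_mask, n)
        # Envelope: rectify, then low-pass by zeroing FFT bins above cutoff.
        env_spec = np.fft.rfft(np.abs(band))
        env = np.fft.irfft(env_spec * (freqs <= env_cutoff), n)
        out += np.clip(env, 0, None) * carrier
    return out

# One second of a 440 Hz tone, vocoded into 4 channels.
fs = 16000
t = np.arange(fs) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 440 * t), fs, n_channels=4)
```

    Fewer channels preserve less spectral detail, which is why the 4-channel condition is the most degraded.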

  3. Differential effects of m1 and m2 receptor antagonists in perirhinal cortex on visual recognition memory in monkeys

    PubMed Central

    Wu, Wei; Saunders, Richard C.; Mishkin, Mortimer; Turchi, Janita

    2012-01-01

    Microinfusions of the nonselective muscarinic antagonist scopolamine into perirhinal cortex impair performance on visual recognition tasks, indicating that muscarinic receptors in this region play a pivotal role in recognition memory. To assess the mnemonic effects of selective blockade in perirhinal cortex of muscarinic receptor subtypes, we locally infused either the m1-selective antagonist pirenzepine or the m2-selective antagonist methoctramine in animals performing one-trial visual recognition, and compared these scores with those following infusions of equivalent volumes of saline. Compared to these control infusions, injections of pirenzepine, but not of methoctramine, significantly impaired recognition accuracy. Further, similar doses of scopolamine and pirenzepine yielded similar deficits, suggesting that the deficits obtained earlier with scopolamine were due mainly, if not exclusively, to blockade of m1 receptors. The present findings indicate that m1 and m2 receptors have functionally dissociable roles, and that the formation of new visual memories is critically dependent on the cholinergic activation of m1 receptors located on perirhinal cells. PMID:22561485

  6. Learning and recognition of on-premise signs from weakly labeled street view images.

    PubMed

    Tsai, Tsung-Hung; Cheng, Wen-Huang; You, Chuang-Wen; Hu, Min-Chun; Tsui, Arvin Wen; Chi, Heng-Yu

    2014-03-01

    Camera-enabled mobile devices are commonly used as interaction platforms for linking the user's virtual and physical worlds in numerous research and commercial applications, such as serving as an augmented reality interface for mobile information retrieval. These application scenarios depend on a key technique: visual object recognition in everyday scenes. On-premise signs (OPSs), a popular form of commercial advertising, are widely used in daily life. OPSs often exhibit great visual diversity (e.g., appearing in arbitrary sizes), accompanied by complex environmental conditions (e.g., foreground and background clutter). Observing that such real-world characteristics are lacking in most existing image data sets, in this paper we first proposed an OPS data set, OPS-62, in which a total of 4,649 OPS images of 62 different businesses were collected from Google Street View. Further, to address the problem of real-world OPS learning and recognition, we developed a probabilistic framework based on distributional clustering, in which we exploit the distributional information of each visual feature (the distribution of its associated OPS labels) as a reliable selection criterion for building discriminative OPS models. Experiments on the OPS-62 data set demonstrated that our approach outperforms state-of-the-art probabilistic latent semantic analysis models, yielding more accurate recognition and fewer false alarms, with a significant 151.28% relative improvement in the average recognition rate. Meanwhile, our approach is simple, linear, and can be executed in parallel, making it practical and scalable for large-scale multimedia applications.
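    The selection criterion described above, scoring each visual feature by the distribution of OPS labels it co-occurs with, can be illustrated with a small sketch. This is not the paper's algorithm; using entropy as the peakedness measure, and the toy counts, are assumptions for illustration.

```python
import numpy as np

def label_entropy(counts):
    """Shannon entropy of a feature's label-count distribution."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def select_discriminative_features(feature_label_counts, k):
    """feature_label_counts: dict feature_id -> per-label occurrence counts.

    Returns the k features whose label distributions are most peaked
    (lowest entropy), i.e. the most class-discriminative ones."""
    scored = sorted(feature_label_counts.items(),
                    key=lambda item: label_entropy(item[1]))
    return [fid for fid, _ in scored[:k]]

# Feature 'a' appears almost exclusively with label 0 -> discriminative;
# feature 'b' is spread evenly across labels -> uninformative.
counts = {"a": [9, 1, 0], "b": [3, 3, 3], "c": [5, 4, 1]}
best = select_discriminative_features(counts, k=1)
```

    A feature whose occurrences concentrate on one sign class carries far more evidence for recognition than one spread evenly across classes.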

  7. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report

    PubMed Central

    Poth, Christian H.; Schneider, Werner X.

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM. PMID:27713722

  8. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report.

    PubMed

    Poth, Christian H; Schneider, Werner X

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM.

  9. Across Space and Time: Infants Learn from Backward and Forward Visual Statistics

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen; Amso, Dima; French, Robert M.; Kirkham, Natasha Z.

    2017-01-01

    This study investigates whether infants are sensitive to backward and forward transitional probabilities within temporal and spatial visual streams. Two groups of 8-month-old infants were familiarized with an artificial grammar of shapes, comprising backward and forward base pairs (i.e. two shapes linked by strong backward or forward transitional…
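    The forward and backward transitional probabilities underlying such statistical-learning designs are straightforward to compute from a stream of shapes. A minimal sketch (the example sequence is hypothetical):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Forward P(B|A) and backward P(A|B) transitional probabilities
    for adjacent pairs in a sequence of symbols."""
    pairs = Counter(zip(sequence, sequence[1:]))
    first = Counter(sequence[:-1])   # counts of the leading element
    second = Counter(sequence[1:])   # counts of the trailing element
    forward = {(a, b): n / first[a] for (a, b), n in pairs.items()}
    backward = {(a, b): n / second[b] for (a, b), n in pairs.items()}
    return forward, backward

# In "ABABCB", B always follows A (forward TP of (A, B) is 1.0), but B is
# also preceded by C, so the backward TP of (A, B) is lower (2/3).
fwd, bwd = transitional_probabilities("ABABCB")
```

    A "strong backward pair" in such a grammar is one where the backward probability is high even when the forward probability is not, so sensitivity to each statistic can be tested separately.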

  10. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    PubMed Central

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
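    Time-resolved decoding of this kind can be illustrated, in spirit, by training a classifier at each time bin. The sketch below uses a nearest-centroid decoder on synthetic data; both the classifier choice and the data are assumptions for illustration, not the authors' method or recordings.

```python
import numpy as np

def decode_over_time(X, y, train_idx, test_idx):
    """Nearest-centroid decoding of category at each time bin.

    X: array (n_trials, n_timebins, n_features) of field-potential features.
    y: array (n_trials,) of integer category labels.
    Returns decoding accuracy per time bin on the held-out trials."""
    classes = np.unique(y)
    n_bins = X.shape[1]
    acc = np.zeros(n_bins)
    for t in range(n_bins):
        # Class centroids estimated from the training trials only.
        centroids = np.stack([X[train_idx][y[train_idx] == c, t].mean(axis=0)
                              for c in classes])
        for i in test_idx:
            d = np.linalg.norm(centroids - X[i, t], axis=1)
            acc[t] += classes[np.argmin(d)] == y[i]
    return acc / len(test_idx)

# Synthetic demo: two categories become separable only from bin 5 onward
# (a stand-in for a ~100 ms post-stimulus latency).
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 10, 8))
y = np.repeat([0, 1], 20)
X[y == 1, 5:, :] += 3.0                      # signal appears late
acc = decode_over_time(X, y, np.arange(0, 40, 2), np.arange(1, 40, 2))
```

    Accuracy hovers at chance in the early bins and rises once the category signal appears, which is how a decoding latency is read off such curves.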

  11. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  12. Native-Language Phonological Interference in Early Hakka-Mandarin Bilinguals' Visual Recognition of Chinese Two-Character Compounds: Evidence from the Semantic-Relatedness Decision Task

    ERIC Educational Resources Information Center

    Wu, Shiyu; Ma, Zheng

    2017-01-01

    Previous research has indicated that, in viewing a visual word, the activated phonological representation in turn activates its homophone, causing semantic interference. Using this mechanism of phonological mediation, this study investigated native-language phonological interference in visual recognition of Chinese two-character compounds by early…

  13. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    PubMed

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically on the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Abdominal Tumor Characterization and Recognition Using Superior-Order Cooccurrence Matrices, Based on Ultrasound Images

    PubMed Central

    Mitrea, Delia; Mitrea, Paulina; Nedevschi, Sergiu; Badea, Radu; Lupsor, Monica; Socaciu, Mihai; Golea, Adela; Hagiu, Claudia; Ciobanu, Lidia

    2012-01-01

    The noninvasive diagnosis of malignant tumors is an important research issue. Our purpose is to develop computerized, texture-based methods for computer-aided characterization and automatic diagnosis of these tumors, using only the information from ultrasound images. In this paper, we considered some of the most frequent abdominal malignant tumors: hepatocellular carcinoma and colonic tumors. We compared these structures with benign tumors and with other visually similar diseases. Besides the textural features that proved useful in our previous research for the characterization and recognition of malignant tumors, we improved our method by using superior-order grey-level co-occurrence matrices and edge-orientation co-occurrence matrices. Our experiments showed that the new textural features increased malignant-tumor classification performance and also revealed visual and physical properties of these structures that emphasize the complex, chaotic structure of the corresponding tissue. PMID:22312411
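    The paper extends co-occurrence matrices to superior order; the standard second-order grey-level co-occurrence matrix (GLCM) and a Haralick-style contrast feature, which such methods build on, can be sketched as follows (the displacement and quantization choices are illustrative assumptions):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dx, dy).

    image: 2-D array of integer grey levels in [0, levels).
    Returns the normalized co-occurrence matrix P[i, j]."""
    img = np.asarray(image)
    h, w = img.shape
    p = np.zeros((levels, levels))
    for yy in range(h - dy):
        for xx in range(w - dx):
            p[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return p / p.sum()

def contrast(p):
    """Haralick contrast feature: sum of P[i, j] * (i - j)**2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# A uniform patch has zero contrast; a checkerboard has high contrast.
flat = np.zeros((4, 4), dtype=int)
checker = np.indices((4, 4)).sum(axis=0) % 2
```

    Superior-order variants count co-occurrences of triplets or larger pixel configurations instead of pairs, capturing more of the chaotic micro-structure the abstract describes.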

  15. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and when they do encode colour, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Improved colour memory and reduced colour-errors were made by users who had the correspondence-based mappings. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
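    One cross-modal correspondence commonly exploited in such devices is luminance-to-pitch (brighter maps to higher). The mapping below is a hypothetical illustration of that principle, not the Creole's actual coding scheme; the frequency range is an assumption.

```python
def luminance_to_pitch(luminance, f_low=220.0, f_high=880.0):
    """Map luminance in [0, 1] to a pitch in Hz, brighter = higher,
    following the cross-modal luminance-pitch correspondence.

    Logarithmic interpolation makes equal luminance steps correspond
    to equal musical intervals."""
    return f_low * (f_high / f_low) ** luminance

# Black maps to 220 Hz, white to 880 Hz (two octaves higher),
# mid-grey to the geometric midpoint, 440 Hz.
p_black = luminance_to_pitch(0.0)
p_grey = luminance_to_pitch(0.5)
p_white = luminance_to_pitch(1.0)
```

    Coding a device "opposite to correspondence" in the study's sense would amount to inverting such a mapping, e.g. darker sounds higher.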

  16. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

    PubMed Central

    Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly

    2017-01-01

    Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550

  18. Distinct spatio-temporal profiles of beta-oscillations within visual and sensorimotor areas during action recognition as revealed by MEG.

    PubMed

    Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim

    2014-05-01

    The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Neural differentiation of lexico-syntactic categories or semantic features? Event-related potential evidence for both.

    PubMed

    Kellenbach, Marion L; Wijers, Albertus A; Hovius, Marjolijn; Mulder, Juul; Mulder, Gijsbertus

    2002-05-15

    Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions between these. This pattern of effects provides support for lexical-semantic knowledge being organized in a manner that takes account both of category-based (grammatical class) and attribute-based distinctions.

  20. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    PubMed

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.
