The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.
Coco, Moreno I; Malcolm, George L; Keller, Frank
2014-01-01
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
The impact of attentional, linguistic, and visual features during object naming
Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank
2013-01-01
Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constraint or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792
Implicit Object Naming in Visual Search: Evidence from Phonological Competition
Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.
2016-01-01
During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018
Embodied attention and word learning by toddlers
Yu, Chen; Smith, Linda B.
2013-01-01
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116
Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.
Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane
2010-12-01
Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, on measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence to support that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.
When a Picasso is a "Picasso": the entry point in the identification of visual art.
Belke, B; Leder, H; Harsanyi, G; Carbon, C C
2010-02-01
We investigated whether art is distinguished from other real world objects in human cognition, in that art allows for a special memorial representation and identification based on artists' specific stylistic appearances. Testing art-experienced viewers, converging empirical evidence from three experiments, which have proved sensitive to addressing the question of initial object recognition, suggest that identification of visual art is at the subordinate level of the producing artist. Specifically, in a free naming task it was found that art-objects as opposed to non-art-objects were most frequently named with subordinate level categories, with the artist's name as the most frequent category (Experiment 1). In a category-verification task (Experiment 2), art-objects were recognized faster than non-art-objects on the subordinate level with the artist's name. In a conceptual priming task, subordinate primes of artists' names facilitated matching responses to art-objects but subordinate primes did not facilitate responses to non-art-objects (Experiment 3). Collectively, these results suggest that the artist's name has a special status in the memorial representation of visual art and serves as a predominant entry point in recognition in art perception. Copyright 2009 Elsevier B.V. All rights reserved.
Age-Related Deficits in Auditory Confrontation Naming
Hanna-Pladdy, Brenda; Choi, Hyun
2015-01-01
The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. PMID:20677880
How Does Using Object Names Influence Visual Recognition Memory?
ERIC Educational Resources Information Center
Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel
2013-01-01
Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…
ERIC Educational Resources Information Center
Acres, K.; Taylor, K. I.; Moss, H. E.; Stamatakis, E. A.; Tyler, L. K.
2009-01-01
Cognitive neuroscientific research proposes complementary hemispheric asymmetries in naming and recognising visual objects, with a left temporal lobe advantage for object naming and a right temporal lobe advantage for object recognition. Specifically, it has been proposed that the left inferior temporal lobe plays a mediational role linking…
It's all connected: Pathways in visual object recognition and early noun learning.
Smith, Linda B
2013-11-01
A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.
2013-01-01
Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Kaga, Kimitaka; Shindo, Mitsuko
2012-04-01
The case of an 8-year-old girl who manifested cortical blindness and whose color drawings of faces and objects were without outlines is reported. Her birth was uneventful. When she was 10 months old, she fell down to the floor from a chair, resulting in a subarachnoidal hemorrhage. A repeat brain MRI revealed localized lesions in the visual cortices in the right and left hemispheres. As she grew older she was found to have visual imperceptions. She was found to have difficulties in learning visually the names of objects with form and letters, and in recognizing faces of her family. However, she was able to discriminate well the colors of faces and objects and learn easily the names of the objects with form by touching. She seemed to utilize subcortical vision for seeing colors of faces and objects.
The Role of Multiple-Exemplar Training and Naming in Establishing Derived Equivalence in an Infant
Luciano, Carmen; Becerra, Inmaculada Gómez; Valverde, Miguel Rodríguez
2007-01-01
The conditions under which symmetry and equivalence relations develop are still controversial. This paper reports three experiments that attempt to analyze the impact of multiple-exemplar training (MET) in receptive symmetry on the emergence of visual–visual equivalence relations with a very young child, Gloria. At the age of 15 months 24 days (15m24d), Gloria was tested for receptive symmetry and naming and showed no evidence of either repertoire. In the first experiment, MET in immediate and delayed receptive symmetrical responding or listener behavior (from object–sound to immediate and delayed sound–object selection) proceeded for one month with 10 different objects. This was followed, at 16m25d, by a second test conducted with six new objects. Gloria showed generalized receptive symmetry with a 3-hr delay; however no evidence of naming with new objects was found. Experiment 2 began at 17m with the aim of establishing derived visual–visual equivalence relations using a matching-to-sample format with two comparisons. Visual–visual equivalence responding emerged at 19m, although Gloria still had not shown evidence of naming. Experiment 3 (22m to 23m25d) used a three-comparison matching-to-sample procedure to establish visual–visual equivalence. Equivalence responding emerged as in Experiment 2, and naming emerged by the end of Experiment 3. Results are discussed in terms of the history of training in bidirectional relations responsible for the emergence of visual–visual equivalence relations and of their implications for current theories of stimulus equivalence. PMID:17575901
ERIC Educational Resources Information Center
Wolk, D.A.; Coslett, H.B.; Glosser, G.
2005-01-01
The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…
Coherence across consciousness levels: Symmetric visual displays spare working memory resources.
Dumitru, Magda L
2015-12-15
Two studies demonstrate that the need for coherence could nudge individuals to use structural similarities between binary visual displays and two concurrent cognitive tasks to unduly solve the latter in similar fashion. In an overt truth-judgement task, participants decided whether symmetric colourful displays matched conjunction or disjunction descriptions (e.g., "the black and/or the orange"). In the simultaneous covert categorisation task, they decided whether a colour name (e.g., "black") described a two-colour object or half of a single-colour object. Two response patterns emerged as follows. Participants either acknowledged or rejected matches between disjunction descriptions and two visual stimuli and, similarly, either acknowledged or rejected matches between single colour names and two-colour objects or between single colour names and half of single-colour objects. These findings confirm the coherence hypothesis, highlight the role of coherence in preserving working-memory resources, and demonstrate an interaction between high-level and low-level consciousness. Copyright © 2015 Elsevier Inc. All rights reserved.
Predictors of photo naming: Dutch norms for 327 photos.
Shao, Zeshu; Stiegert, Julia
2016-06-01
In the present study, we report naming latencies and norms for 327 photos of objects in Dutch. We provide norms for eight psycholinguistic variables: age of acquisition, familiarity, imageability, image agreement, objective and subjective visual complexity, word frequency, word length in syllables and letters, and name agreement. Furthermore, multiple regression analyses revealed that the significant predictors of photo-naming latencies were name agreement, word frequency, imageability, and image agreement. The naming latencies, norms, and stimuli are provided as supplemental materials.
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.
Adlington, Rebecca L; Laws, Keith R; Gale, Tim M
2009-10-01
It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.
Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G
2000-06-01
Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i. e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.
Walla, Peter; Brenner, Gerhard; Koller, Monika
2011-01-01
With this study we wanted to test the hypothesis that individual like and dislike as occurring in relation to brand attitude can be objectively assessed. First, individuals rated common brands with respect to subjective preference. Then, they volunteered in an experiment during which their most liked and disliked brand names were visually presented while three different objective measures were taken. Participant's eye blinks as responses to acoustic startle probes were registered with electromyography (EMG) (i) and their skin conductance (ii) and their heart rate (iii) were recorded. We found significantly reduced eye blink amplitudes related to liked brand names compared to disliked brand names. This finding suggests that visual perception of liked brand names elicits higher degrees of pleasantness, more positive emotion and approach-oriented motivation than visual perception of disliked brand names. Also, skin conductance and heart rate were both reduced in case of liked versus disliked brand names. We conclude that all our physiological measures highlight emotion-related differences depending on the like and dislike toward individual brands. We suggest that objective measures should be used more frequently to quantify emotion-related aspects of brand attitude. In particular, there might be potential interest to introduce startle reflex modulation to measure emotion-related impact during product development, product design and various further fields relevant to marketing. Our findings are discussed in relation to the idea that self reported measures are most often cognitively polluted. PMID:22073192
Walla, Peter; Brenner, Gerhard; Koller, Monika
2011-01-01
With this study we wanted to test the hypothesis that individual like and dislike as occurring in relation to brand attitude can be objectively assessed. First, individuals rated common brands with respect to subjective preference. Then, they volunteered in an experiment during which their most liked and disliked brand names were visually presented while three different objective measures were taken. Participant's eye blinks as responses to acoustic startle probes were registered with electromyography (EMG) (i) and their skin conductance (ii) and their heart rate (iii) were recorded. We found significantly reduced eye blink amplitudes related to liked brand names compared to disliked brand names. This finding suggests that visual perception of liked brand names elicits higher degrees of pleasantness, more positive emotion and approach-oriented motivation than visual perception of disliked brand names. Also, skin conductance and heart rate were both reduced in case of liked versus disliked brand names. We conclude that all our physiological measures highlight emotion-related differences depending on the like and dislike toward individual brands. We suggest that objective measures should be used more frequently to quantify emotion-related aspects of brand attitude. In particular, there might be potential interest to introduce startle reflex modulation to measure emotion-related impact during product development, product design and various further fields relevant to marketing. Our findings are discussed in relation to the idea that self reported measures are most often cognitively polluted.
Urooj, Uzma; Cornelissen, Piers L; Simpson, Michael I G; Wheat, Katherine L; Woods, Will; Barca, Laura; Ellis, Andrew W
2014-02-15
The age of acquisition (AoA) of objects and their names is a powerful determinant of processing speed in adulthood, with early-acquired objects being recognized and named faster than late-acquired objects. Previous research using fMRI (Ellis et al., 2006. Traces of vocabulary acquisition in the brain: evidence from covert object naming. NeuroImage 33, 958-968) found that AoA modulated the strength of BOLD responses in both occipital and left anterior temporal cortex during object naming. We used magnetoencephalography (MEG) to explore in more detail the nature of the influence of AoA on activity in those two regions. Covert object naming recruited a network within the left hemisphere that is familiar from previous research, including visual, left occipito-temporal, anterior temporal and inferior frontal regions. Region of interest (ROI) analyses found that occipital cortex generated a rapid evoked response (~75-200 ms at 0-40 Hz) that peaked at 95 ms but was not modulated by AoA. That response was followed by a complex of later occipital responses that extended from ~300 to 850 ms and were stronger to early- than late-acquired items from ~325 to 675 ms at 10-20 Hz in the induced rather than the evoked component. Left anterior temporal cortex showed an evoked response that occurred significantly later than the first occipital response (~100-400 ms at 0-10 Hz with a peak at 191 ms) and was stronger to early- than late-acquired items from ~100 to 300 ms at 2-12 Hz. A later anterior temporal response from ~550 to 1050 ms at 5-20 Hz was not modulated by AoA. The results indicate that the initial analysis of object forms in visual cortex is not influenced by AoA. A fastforward sweep of activation from occipital and left anterior temporal cortex then results in stronger activation of semantic representations for early- than late-acquired objects. Top-down re-activation of occipital cortex by semantic representations is then greater for early than late acquired objects resulting in delayed modulation of the visual response. Copyright © 2013 Elsevier Inc. All rights reserved.
Evidence for a Limited-Cascading Account of Written Word Naming
ERIC Educational Resources Information Center
Bonin, Patrick; Roux, Sebastien; Barry, Christopher; Canell, Laura
2012-01-01
We address the issue of how information flows within the written word production system by examining written object-naming latencies. We report 4 experiments in which we manipulate variables assumed to have their primary impact at the level of object recognition (e.g., quality of visual presentation of pictured objects), at the level of semantic…
Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording
Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi
2014-01-01
OBJECTIVE Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name the object as it is first perceived. RESULTS Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in exerting intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE Our results support the theoretical model that guessing processes in visual domain occur following the accumulation of sensory evidence resulting from the bottom-up processing in the occipital-temporal visual pathways. PMID:24815577
A Case for Inhibition: Visual Attention Suppresses the Processing of Irrelevant Objects
ERIC Educational Resources Information Center
Wuhr, Peter; Frings, Christian
2008-01-01
The present study investigated the ability to inhibit the processing of an irrelevant visual object while processing a relevant one. Participants were presented with 2 overlapping shapes (e.g., circle and square) in different colors. The task was to name the color of the relevant object designated by shape. Congruent or incongruent color words…
Semantic and visual determinants of face recognition in a prosopagnosic patient.
Dixon, M J; Bub, D N; Arguin, M
1998-05-01
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard -three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
[Visual Texture Agnosia in Humans].
Suzuki, Kyoko
2015-06-01
Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.
Visual object naming in patients with small lesions centered at the left temporopolar region.
Campo, Pablo; Poch, Claudia; Toledano, Rafael; Igoa, José Manuel; Belinchón, Mercedes; García-Morales, Irene; Gil-Nagel, Antonio
2016-01-01
Naming is considered a left hemisphere function that operates according to a posterior-anterior specificity gradient, with more fine-grained information processed in most anterior regions of the temporal lobe (ATL), including the temporal pole (TP). Word finding difficulties are typically assessed using visual confrontation naming tasks, and have been associated with selective damage to ATL resulting from different aetiologies. Nonetheless, the role of the ATL and, more specifically, of the TP in the naming network is not completely established. Most of the accumulated evidence is based on studies on patients with extensive lesions, often bilateral. Furthermore, there is a considerable variability in the anatomical definition of ATL. To better understand the specific involvement of the left TP in visual object naming, we assessed a group of patients with an epileptogenic lesion centered at the TP, and compared their performance with that of a strictly matched control group. We also administered a battery of verbal and non-verbal semantic tasks that was used as a semantic memory baseline. Patients showed an impaired naming ability, manifesting in a certain degree of anomia and semantically related naming errors, which was influenced by concept familiarity. This pattern took place in a context of mild semantic dysfunction that was evident in different types and modalities of semantic tasks. Therefore, current findings demonstrate that a restricted lesion to the left TP can cause a significant deficit in object naming. Of importance, the observed semantic impairment was far from the devastating degradation observed in semantic dementia and other bilateral conditions.
Age effects on visual-perceptual processing and confrontation naming.
Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon
2010-03-01
The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency-manipulations revealed that, in general, young vs. older subjects are faster and more accurate to name. However, this age-related difference is dependent on the spatial frequency on the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older vs. young adults are more accurate to name. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older vs. young adults exhibit a stronger nonliving-object advantage for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications on the aging and naming (and recognition) literature are discussed.
The evolution of meaning: spatio-temporal dynamics of visual object recognition.
Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K
2011-08-01
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.
Orientation priming of grasping decision for drawings of objects and blocks, and words.
Chainay, Hanna; Naouri, Lucie; Pavec, Alice
2011-05-01
This study tested the influence of orientation priming on grasping decisions. Two groups of 20 healthy participants had to select a preferred grasping orientation (horizontal, vertical) based on drawings of everyday objects, geometric blocks or object names. Three priming conditions were used: congruent, incongruent and neutral. The facilitating effects of priming were observed in the grasping decision task for drawings of objects and blocks but not object names. The visual information about congruent orientation in the prime quickened participants' responses but had no effect on response accuracy. The results are discussed in the context of the hypothesis that an object automatically potentiates grasping associated with it, and that the on-line visual information is necessary for grasping potentiation to occur. The possibility that the most frequent orientation of familiar objects might be included in object-action representation is also discussed.
Lloyd-Jones, Toby J; Nakabayashi, Kazuyo
2014-01-01
Using a novel paradigm to engage the long-term mappings between object names and the prototypical colors for objects, we investigated the retrieval of object-color knowledge as indexed by long-term priming (the benefit in performance from a prior encounter with the same or a similar stimulus); a process about which little is known. We examined priming from object naming on a lexical-semantic matching task. In the matching task participants encountered a visually presented object name (Experiment 1) or object shape (Experiment 2) paired with either a color patch or color name. The pairings could either match whereby both were consistent with a familiar object (e.g., strawberry and red) or mismatch (strawberry and blue). We used the matching task to probe knowledge about familiar objects and their colors pre-activated during object naming. In particular, we examined whether the retrieval of object-color information was modality-specific and whether this influenced priming. Priming varied with the nature of the retrieval process: object-color priming arose for object names but not object shapes and beneficial effects of priming were observed for color patches whereas inhibitory priming arose with color names. These findings have implications for understanding how object knowledge is retrieved from memory and modified by learning.
Picture Detection in Rapid Serial Visual Presentation: Features or Identity?
ERIC Educational Resources Information Center
Potter, Mary C.; Wyble, Brad; Pandav, Rijuta; Olejarczyk, Jennifer
2010-01-01
A pictured object can be readily detected in a rapid serial visual presentation sequence when the target is specified by a superordinate category name such as "animal" or "vehicle". Are category features the initial basis for detection, with identification of the specific object occurring in a second stage (Evans &…
What Is the Unit of Visual Attention? Object for Selection, but Boolean Map for Access
ERIC Educational Resources Information Center
Huang, Liqiang
2010-01-01
In the past 20 years, numerous theories and findings have suggested that the unit of visual attention is the object. In this study, I first clarify 2 different meanings of unit of visual attention, namely the unit of access in the sense of measurement and the unit of selection in the sense of division. In accordance with this distinction, I argue…
Intracerebral stimulation of left and right ventral temporal cortex during object naming.
Bédos Ulvin, Line; Jonas, Jacques; Brissart, Hélène; Colnat-Coulbois, Sophie; Thiriaux, Anne; Vignal, Jean-Pierre; Maillard, Louis
2017-12-01
While object naming is traditionally considered asa left hemisphere function, neuroimaging studies have reported activations related to naming in the ventral temporal cortex (VTC) bilaterally. Our aim was to use intracerebral electrical stimulation to specifically compare left and right VTC in naming. In twenty-three epileptic patients tested for visual object naming during stimulation, the proportion of naming impairments was significantly higher in the left than in the right VTC (31.3% vs 13.6%). The highest proportions of positive naming sites were found in the left fusiform gyrus and occipito-temporal sulcus (47.5% and 31.8%). For 17 positive left naming sites, an additional semantic picture matching was carried out, always successfully performed. Our results showed the enhanced role of the left compared to the right VTC in naming and suggest that it may be involved in lexical retrieval rather than in semantic processing. Copyright © 2017 Elsevier Inc. All rights reserved.
Lloyd-Jones, Toby J.; Nakabayashi, Kazuyo
2014-01-01
Using a novel paradigm to engage the long-term mappings between object names and the prototypical colors for objects, we investigated the retrieval of object-color knowledge as indexed by long-term priming (the benefit in performance from a prior encounter with the same or a similar stimulus); a process about which little is known. We examined priming from object naming on a lexical-semantic matching task. In the matching task participants encountered a visually presented object name (Experiment 1) or object shape (Experiment 2) paired with either a color patch or color name. The pairings could either match whereby both were consistent with a familiar object (e.g., strawberry and red) or mismatch (strawberry and blue). We used the matching task to probe knowledge about familiar objects and their colors pre-activated during object naming. In particular, we examined whether the retrieval of object-color information was modality-specific and whether this influenced priming. Priming varied with the nature of the retrieval process: object-color priming arose for object names but not object shapes and beneficial effects of priming were observed for color patches whereas inhibitory priming arose with color names. These findings have implications for understanding how object knowledge is retrieved from memory and modified by learning. PMID:25009522
Visual naming deficits in dyslexia: An ERP investigation of different processing domains.
Araújo, Susana; Faísca, Luís; Reis, Alexandra; Marques, J Frederico; Petersson, Karl Magnus
2016-10-01
Naming speed deficits are well documented in developmental dyslexia, expressed by slower naming times and more errors in response to familiar items. Here we used event-related potentials (ERPs) to examine at what processing level the deficits in dyslexia emerge during a discrete-naming task. Dyslexic and skilled adult control readers performed a primed object-naming task, in which the relationship between the prime and the target was manipulated along perceptual, semantic and phonological dimensions. A 3×2 design that crossed Relationship Type (Visual, Phonemic Onset, and Semantic) with Relatedness (Related and Unrelated) was used. An attenuated N/P190 - indexing early visual processing - and N300 - which index late visual processing - was observed to pictures preceded by perceptually related (vs. unrelated) primes in the control but not in the dyslexic group. These findings suggest suboptimal processing in early stages of object processing in dyslexia, when integration and mapping of perceptual information to a more form-specific percept in memory take place. On the other hand, both groups showed an N400 effect associated with semantically related pictures (vs. unrelated), taken to reflect intact integration of semantic similarities in both dyslexic and control readers. We also found an electrophysiological effect of phonological priming in the N400 range - that is, an attenuated N400 to objects preceded by phonemic related primes vs. unrelated - while it showed a more widespread distributed and more pronounced over the right hemisphere in the dyslexics. Topographic differences between groups might have originated from a word form encoding process with different characteristics in dyslexics compared to control readers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Near or far: The effect of spatial distance and vocabulary knowledge on word learning.
Axelsson, Emma L; Perry, Lynn K; Scott, Emilly J; Horst, Jessica S
2016-01-01
The current study investigated the role of spatial distance in word learning. Two-year-old children saw three novel objects named while the objects were either in close proximity to each other or spatially separated. Children were then tested on their retention for the name-object associations. Keeping the objects spatially separated from each other during naming was associated with increased retention for children with larger vocabularies. Children with a lower vocabulary size demonstrated better retention if they saw objects in close proximity to each other during naming. This demonstrates that keeping a clear view of objects during naming improves word learning for children who have already learned many words, but keeping objects within close proximal range is better for children at earlier stages of vocabulary acquisition. The effect of distance is therefore not equal across varying vocabulary sizes. The influences of visual crowding, cognitive load, and vocabulary size on word learning are discussed. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Harris, Irina M.; Murray, Alexandra M.; Hayward, William G.; O'Callaghan, Claire; Andrews, Sally
2012-01-01
We used repetition blindness to investigate the nature of the representations underlying identification of manipulable objects. Observers named objects presented in rapid serial visual presentation streams containing either manipulable or nonmanipulable objects. In half the streams, 1 object was repeated. Overall accuracy was lower when streams…
Horst, Jessica S; Hout, Michael C
2016-12-01
Many experimental research designs require images of novel objects. Here we introduce the Novel Object and Unusual Name (NOUN) Database. This database contains 64 primary novel object images and additional novel exemplars for ten basic- and nine global-level object categories. The objects' novelty was confirmed by both self-report and a lack of consensus on questions that required participants to name and identify the objects. We also found that object novelty correlated with qualifying naming responses pertaining to the objects' colors. The results from a similarity sorting task (and a subsequent multidimensional scaling analysis on the similarity ratings) demonstrated that the objects are complex and distinct entities that vary along several featural dimensions beyond simply shape and color. A final experiment confirmed that additional item exemplars comprised both sub- and superordinate categories. These images may be useful in a variety of settings, particularly for developmental psychology and other research in the language, categorization, perception, visual memory, and related domains.
ERIC Educational Resources Information Center
Richler, Jennifer J.; Gauthier, Isabel; Palmeri, Thomas J.
2011-01-01
Are there consequences of calling objects by their names? Lupyan (2008) suggested that overtly labeling objects impairs subsequent recognition memory because labeling shifts stored memory representations of objects toward the category prototype (representational shift hypothesis). In Experiment 1, we show that processing objects at the basic…
[Associative visual agnosia. The less visible consequences of a cerebral infarction].
Diesfeldt, H F A
2011-02-01
After a cerebral infarction, some patients acutely demonstrate contralateral hemiplegia, or aphasia. Those are the obvious symptoms of a cerebral infarction. However, less visible but burdensome consequences may go unnoticed without closer investigation. The importance of a thorough clinical examination is exemplified by a single case study of a 72-year-old, right-handed male. Two years before he had suffered from an ischemic stroke in the territory of the left posterior cerebral artery, with right homonymous hemianopia and global alexia (i.e., impairment in letter recognition and profound impairment of reading) without agraphia. Naming was impaired on visual presentation (20%-39% correct), but improved significantly after tactile presentation (87% correct) or verbal definition (89%). Pre-semantic visual processing was normal (correct matching of different views of the same object), as was his access to structural knowledge from vision (he reliably distinguished real objects from non-objects). On a colour decision task he reliably indicated which of two items was coloured correctly. Though he was unable to mime how visually presented objects were used, he more reliably matched pictures of objects with pictures of a mime artist gesturing the use of the object. He obtained normal scores on word definition (WAIS-III), synonym judgment and word-picture matching tasks with perceptual and semantic distractors. He however failed when he had to match physically dissimilar specimens of the same object or when he had to decide which two of five objects were related associatively (Pyramids and Palm Trees Test). The patient thus showed a striking contrast in his intact ability to access knowledge of object shape or colour from vision and impaired functional and associative knowledge. As a result, he could not access a complete semantic representation, required for activating phonological representations to name visually presented objects. The pattern of impairments and preserved abilities is considered to be a specific difficulty to access a full semantic representation from an intact structural representation of visually presented objects, i.e., a form of visual object agnosia.
Characteristic sounds facilitate visual search.
Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2008-06-01
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.
Shapes and sounds as self-objects in learning geography.
Baum, E A
1978-01-01
The pleasure which some children find in maps and map reading is manifold in origin. Children cathect patterns of configuration and color and derive joy from the visual mastery of these. This gratification is enhanced by the child's knowledge that the map represents something bigger than and external to itself. Likewise, some children take pleasure in the pronunciation of names themselves. The phonetic transcription of multisyllabic names is often a plearurable challenge. The vocalized name has its origin in the self, becomes barely external to self, and is self-monitored. Thus, in children both the configurations and the vocalizations associated with map reading have the properties of "self=objects" (Kohut, 1971). From the author's observation the delight which some children take in sounding out geographic names on a map may, in some instances, indicate pre-existing gratifying sound associations. Childish amusement in punning on cognomens may be an even greater stimulant for learning than visual configurations or artificial cognitive devices.
Tsaparina, Diana; Bonin, Patrick; Méot, Alain
2011-12-01
The aim of the present study was to provide Russian normative data for the Snodgrass and Vanderwart (Behavior Research Methods, Instruments, & Computers, 28, 516-536, 1980) colorized pictures (Rossion & Pourtois, Perception, 33, 217-236, 2004). The pictures were standardized on name agreement, image agreement, conceptual familiarity, imageability, and age of acquisition. Objective word frequency and objective visual complexity measures are also provided for the most common names associated with the pictures. Comparative analyses between our results and the norms obtained in other, similar studies are reported. The Russian norms may be downloaded from the Psychonomic Society supplemental archive.
Lateralized electrical brain activity reveals covert attention allocation during speaking.
Rommers, Joost; Meyer, Antje S; Praamstra, Peter
2017-01-27
Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bilingual Control: Sequential Memory in Language Switching
ERIC Educational Resources Information Center
Declerck, Mathieu; Philipp, Andrea M.; Koch, Iring
2013-01-01
To investigate bilingual language control, prior language switching studies presented visual objects, which had to be named in different languages, typically indicated by a visual cue. The present study examined language switching of predictable responses by introducing a novel sequence-based language switching paradigm. In 4 experiments,…
Characteristic sounds facilitate visual search
Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2009-01-01
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253
McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.
2014-01-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) was assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
McMenamin, Brenton W; Deason, Rebecca G; Steele, Vaughn R; Koutstaal, Wilma; Marsolek, Chad J
2015-02-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) was assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. Copyright © 2014 Elsevier Inc. All rights reserved.
A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.
Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W
2012-09-01
In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.
Hiraoka, Kotaro; Suzuki, Kyoko; Hirayama, Kazumi; Mori, Etsuro
2009-01-01
We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed. PMID:19996516
Hiraoka, Kotaro; Suzuki, Kyoko; Hirayama, Kazumi; Mori, Etsuro
2009-01-01
We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
Wu, Helen C.; Nagasawa, Tetsuro; Brown, Erik C.; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2011-01-01
Objective We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. Methods We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Results Both tasks commonly elicited gamma-augmentation (maximally at 80–100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Conclusions Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. Significance The present study increases our understanding of the visual-language pathways. PMID:21498109
ERIC Educational Resources Information Center
Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.
2010-01-01
There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning
Yee, Meagan; Jones, Susan S.; Smith, Linda B.
2012-01-01
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015
Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-01-01
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognitions. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn version. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245
Brodeur, Mathieu B; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-05-24
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognitions. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn version. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli.
Programming Education with a Blocks-Based Visual Language for Mobile Application Development
ERIC Educational Resources Information Center
Mihci, Can; Ozdener, Nesrin
2014-01-01
The aim of this study is to assess the impact upon academic success of the use of a reference block-based visual programming tool, namely the MIT App Inventor for Android, as an educational instrument for teaching object-oriented GUI-application development (CS2) concepts to students; who have previously completed a fundamental programming course…
A novel visualization model for web search results.
Nguyen, Tien N; Zhang, Jin
2006-01-01
This paper presents an interactive visualization system, named WebSearchViz, for visualizing the Web search results and acilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. Especially, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance among a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.
A cortical pathway to olfactory naming: evidence from primary progressive aphasia
Rogalski, Emily; Harrison, Theresa; Mesulam, M.-Marsel; Gottfried, Jay A.
2013-01-01
It is notoriously difficult to name odours. Without the benefit of non-olfactory information, even common household smells elude our ability to name them. The neuroscientific basis for this olfactory language ‘deficit’ is poorly understood, and even basic models to explain how odour inputs gain access to transmodal representations required for naming have not been put forward. This study used patients with primary progressive aphasia, a clinical dementia syndrome characterized by primary deficits in language, to investigate the interactions between olfactory inputs and lexical access by assessing behavioural performance of olfactory knowledge and its relationship to brain atrophy. We specifically hypothesized that the temporal pole would play a key role in linking odour object representations to transmodal networks, given its anatomical proximity to olfactory and visual object processing areas. Behaviourally, patients with primary progressive aphasia with non-semantic subtypes were severely impaired on an odour naming task, in comparison with an age-matched control group. However, with the availability of picture cues or word cues, odour matching performance approached control levels, demonstrating an inability to retrieve but not to recognize the name and nature of the odorant. The magnitude of cortical thinning in the temporal pole was found to correlate with reductions in odour familiarity and odour matching to visual cues, whereas the inferior frontal gyrus correlated with both odour naming and matching. Volumetric changes in the mediodorsal thalamus correlated with the proportion of categorical mismatch errors, indicating a possible role of this region in error-signal monitoring to optimize recognition of associations linked to the odour. A complementary analysis of patients with the semantic subtype of primary progressive aphasia, which is associated with marked temporopolar atrophy, revealed much more pronounced impairments of odour naming and matching. In identifying the critical role of the temporal pole and inferior frontal gyrus in transmodal linking and verbalization of olfactory objects, our findings provide a new neurobiological foundation for understanding why even common odours are hard to name. PMID:23471695
The bank of standardized stimuli (BOSS): comparison between French and English norms.
Brodeur, Mathieu B; Kehayia, Eva; Dion-Lessard, Geneviève; Chauret, Mélissa; Montreuil, Tina; Dionne-Dostie, Emmanuelle; Lepage, Martin
2012-12-01
Throughout the last decades, numerous picture data sets have been developed, such as the Snodgrass and Vanderwart (1980) set, and have been normalized for variables such as name and familiarity; however, due to cultural and linguistic differences, norms can vary from one country to another. The effect due specifically to culture has already been demonstrated by comparing samples from different countries where the same language is spoken. On the other hand, it is still not clear how differences between languages may affect norms. The present study explores this issue by collecting and comparing norms on names and many other features from French Canadian speakers and English Canadian speakers living in Montreal, who thus live in similar cultural environments. Norms were collected for the photos of objects from the Bank of Standardized Stimuli (BOSS) by asking participants to name the objects, to categorize them, and to rate their familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Names and ratings from the French speakers are available in Appendix A, available in the supplemental materials. The results show that most of the norms are comparable across linguistic groups and also that the ratings given are correlated across linguistic groups. The only significant group differences were found in viewpoint agreement and visual complexity. Overall, there was good concordance between the norms collected from French and English native speakers living in the same cultural setting.
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.
The representation of semantic knowledge in a child with Williams syndrome.
Robinson, Sally J; Temple, Christine M
2009-05-01
This study investigated whether there are distinct types of semantic knowledge with distinct representational bases during development. The representation of semantic knowledge in a teenage child (S.T.) with Williams syndrome was explored for the categories of animals, fruit, and vegetables, manipulable objects, and nonmanipulable objects. S.T.'s lexical stores were of a normal size but the volume of "sensory feature" semantic knowledge she generated in oral descriptions was reduced. In visual recognition decisions, S.T. made more false positives to nonitems than did controls. Although overall naming of pictures was unimpaired, S.T. exhibited a category-specific anomia for nonmanipulable objects and impaired naming of visual-feature descriptions of animals. S.T.'s performance was interpreted as reflecting the impaired integration of distinctive features from perceptual input, which may impact upon nonmanipulable objects to a greater extent than the other knowledge categories. Performance was used to inform adult-based models of semantic representation, with category structure proposed to emerge due to differing degrees of dependency upon underlying knowledge types, feature correlations, and the acquisition of information from modality-specific processing modules.
Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon
2016-03-01
According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.
Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237
Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis.
Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian
2016-08-01
Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the spatial and temporal separation of real-world memory targets and erroneously selected nontarget items examined during location-recognition and object-recall tasks. In Experiment 1 (the location-recognition task), our test display comprised either the picture or name of 1 previously examined memory stimulus (rendered above as the stimulus-display area), together with numbered square boxes at each of the memory-stimulus locations used in that trial. Participants were asked to report the number inside the square box corresponding to the location at which the cued object was originally presented. In Experiment 2 (the object-recall task), the test display comprised a single empty square box presented at 1 memory-stimulus location. Participants were asked to report the name of the object presented at that location. In both experiments, nontarget objects that were spatially and temporally proximal to the memory target were confused more often than nontarget objects that were spatially and temporally distant (i.e., a spatiotemporal proximity effect); this effect generalized across memory tasks, and the object feature (picture or name) that cued the test-display memory target. Our findings are discussed in terms of spatial and temporal confusion "fields" in VSTM, wherein objects occupy diffuse loci in a spatiotemporal coordinate system, wherein neighboring locations are more susceptible to confusion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Space-based visual attention: a marker of immature selective attention in toddlers?
Rivière, James; Brisson, Julie
2014-11-01
Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.
Effects of Perceptual and Contextual Enrichment on Visual Confrontation Naming in Adult Aging
Rogalski, Yvonne; Peelle, Jonathan E.; Reilly, Jamie
2013-01-01
Purpose The purpose of this study was to determine the effects of enriching line drawings with color/texture and environmental context as a facilitator of naming speed and accuracy in older adults. Method Twenty young and 23 older adults named high-frequency picture stimuli from the Boston Naming Test (Kaplan, Goodglass, & Weintraub, 2001) under three conditions: (a) black-and-white items, (b) colorized-texturized items, and (c) scene-primed colored items (e.g., “hammock” preceded 1,000 ms by a backyard scene). Results With respect to speeded naming latencies, mixed-model analyses of variance revealed that young adults did not benefit from colorization-texturization but did show scene-priming effects. In contrast, older adults failed to show facilitation effects from either colorized-texturized or scene-primed items. Moreover, older adults were consistently slower to initiate naming than were their younger counterparts across all conditions. Conclusions Perceptual and contextual enrichment of sparse line drawings does not appear to facilitate visual confrontation naming in older adults, whereas younger adults do tend to show benefits of scene priming. We interpret these findings as generally supportive of a processing speed account of age-related object picture-naming difficulty. PMID:21498581
[The Visual Association Test to study episodic memory in clinical geriatric psychology].
Diesfeldt, Han; Prins, Marleen; Lauret, Gijs
2018-04-01
The Visual Association Test (VAT) is a brief learning task that consists of six line drawings of pairs of interacting objects (association cards). Subjects are asked to name or identify each object and later are presented with one object from the pair (the cue) and asked to name the other (the target). The VAT was administered in a consecutive sample of 174 psychogeriatric day care participants with mild to major neurocognitive disorder. Comparison of test performance with normative data from non-demented subjects revealed that 69% scored within the range of a major deficit (0-8 over two recall trials), 14% a minor, and 17% no deficit (9-10, and ≥10 respectively).VAT-scores correlated with another test of memory function, the Cognitive Screening Test (CST), based on the Short Portable Mental Status Questionnaire (r = 0.53). Tests of executive functioning (Expanded Mental Control Test, Category Fluency, Clock Drawing) did not add significantly to the explanation of variance in VAT-scores.Fifty-five participants (31.6%) were faced with initial problems in naming or identifying one or more objects on the cue cards or association cards. If necessary, naming was aided by the investigator. Initial difficulties in identifying cue objects were associated with lower VAT-scores, but this did not hold for difficulties in identifying target objects.A hierarchical multiple regression analysis was used to examine whether linear or quadratic trends best fitted VAT performance across the range of CST scores. The regression model revealed a linear but not a quadratic trend. The best fitting linear model implied that VAT scores differentiated between CST scores in the lower, as well as in the upper range, indicating the absence of floor and ceiling effects, respectively. Moreover, the VAT compares favourably to word list-learning tasks being more attractive in its presentation of interacting visual objects and cued recall based on incidental learning of the association between cues and targets.For practical purposes and based on documented sensitivity and specificity, Bayesian probability tables give predictive power of age-specific VAT cutoff scores for the presence or absence of a major neurocognitive disorder across a range of a priori probabilities or base rates.
Observational Word Learning in Two Bonobos ("Pan Panicus"): Ostensive and Non-Ostensive Contexts.
ERIC Educational Resources Information Center
Lyn, Heidi; Savage-Rumbaugh, E. Sue
2000-01-01
Using a modified human paradigm, this article explores two language-competent bonobos' abilities to map new words to objects in realistic surroundings with few exposures to the referents. Also investigates the necessity of the apes maintaining visual contact with the item to map the novel name onto the novel object. (Author/VWL)
Carnaghi, Andrea; Mitrovic, Aleksandra; Leder, Helmut; Fantoni, Carlo; Silani, Giorgia
2018-01-01
A controversial hypothesis, named the Sexualized Body Inversion Hypothesis (SBIH), claims similar visual processing of sexually objectified women (i.e., with a focus on the sexual body parts) and inanimate objects as indicated by an absence of the inversion effect for both type of stimuli. The current study aims at shedding light into the mechanisms behind the SBIH in a series of 4 experiments. Using a modified version of Bernard et al.´s (2012) visual-matching task, first we tested the core assumption of the SBIH, namely that a similar processing style occurs for sexualized human bodies and objects. In Experiments 1 and 2 a non-sexualized (personalized) condition plus two object-control conditions (mannequins, and houses) were included in the experimental design. Results showed an inversion effect for images of personalized women and mannequins, but not for sexualized women and houses. Second, we explored whether this effect was driven by differences in stimulus asymmetry, by testing the mediating and moderating role of this visual feature. In Experiment 3, we provided the first evidence that not only the sexual attributes of the images but also additional perceptual features of the stimuli, such as their asymmetry, played a moderating role in shaping the inversion effect. Lastly, we investigated the strategy adopted in the visual-matching task by tracking eye movements of the participants. Results of Experiment 4 suggest an association between a specific pattern of visual exploration of the images and the presence of the inversion effect. Findings are discussed with respect to the literature on sexual objectification. PMID:29621249
Classification of visual and linguistic tasks using eye-movement features.
Coco, Moreno I; Keller, Frank
2014-03-07
The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco
2015-10-15
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
Disentangling visual imagery and perception of real-world objects
Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.
2011-01-01
During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738
Picture agnosia as a characteristic of posterior cortical atrophy.
Sugimoto, Azusa; Midorikawa, Akira; Koyama, Shinichi; Futamura, Akinori; Hieda, Sotaro; Kawamura, Mitsuru
2012-01-01
Posterior cortical atrophy (PCA) is a degenerative disease characterized by progressive visual agnosia with posterior cerebral atrophy. We examine the role of the picture naming test and make a number of suggestions with regard to diagnosing PCA as atypical dementia. We investigated 3 cases of early-stage PCA with 7 control cases of Alzheimer disease (AD). The patients and controls underwent a naming test with real objects and colored photographs of familiar objects. We then compared rates of correct answers. Patients with early-stage PCA showed significant inability to recognize photographs compared to real objects (F = 196.284, p = 0.0000) as measured by analysis of variants. This difficulty was also significant to AD controls (F = 58.717, p = 0.0000). Picture agnosia is a characteristic symptom of early-stage PCA, and the picture naming test is useful for the diagnosis of PCA as atypical dementia at an early stage. Copyright © 2012 S. Karger AG, Basel.
Reilly, Jamie; Garcia, Amanda; Binney, Richard J.
2016-01-01
Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210
Gardini, Simona; Venneri, Annalena; Sambataro, Fabio; Cuetos, Fernando; Fasano, Fabrizio; Marchi, Massimo; Crisi, Girolamo; Caffarra, Paolo
2015-01-01
Semantic memory decline and changes of default mode network (DMN) connectivity have been reported in mild cognitive impairment (MCI). Only a few studies, however, have investigated the role of changes of activity in the DMN on semantic memory in this clinical condition. The present study aimed to investigate more extensively the relationship between semantic memory impairment and DMN intrinsic connectivity in MCI. Twenty-one MCI patients and 21 healthy elderly controls matched for demographic variables took part in this study. All participants underwent a comprehensive semantic battery including tasks of category fluency, visual naming and naming from definition for objects, actions and famous people, word-association for early and late acquired words and reading. A subgroup of the original sample (16 MCI patients and 20 healthy elderly controls) was also scanned with resting state functional magnetic resonance imaging and DMN connectivity was estimated using a seed-based approach. Compared with healthy elderly, patients showed an extensive semantic memory decline in category fluency, visual naming, naming from definition, words-association, and reading tasks. Patients presented increased DMN connectivity between the medial prefrontal regions and the posterior cingulate and between the posterior cingulate and the parahippocampus and anterior hippocampus. MCI patients also showed a significant negative correlation of medial prefrontal gyrus connectivity with parahippocampus and posterior hippocampus and visual naming performance. Our findings suggest that increasing DMN connectivity may contribute to semantic memory deficits in MCI, specifically in visual naming. Increased DMN connectivity with posterior cingulate and medio-temporal regions seems to represent a maladaptive reorganization of brain functions in MCI, which detrimentally contributes to cognitive impairment in this clinical population.
When concepts lose their color: A case of object color knowledge impairment
Stasenko, Alena; Garcea, Frank E.; Dombovy, Mary; Mahon, Bradford Z.
2014-01-01
Color is important in our daily interactions with objects, and plays a role in both low- and high-level visual processing. Previous neuropsychological studies have shown that color perception and object-color knowledge can doubly dissociate, and that both can dissociate from processing of object form. We present a case study of an individual who displayed an impairment for knowledge of the typical colors of objects, with preserved color perception and color naming. Our case also presented with a pattern of, if anything, worse performance for naming living items compared to nonliving things. The findings of the experimental investigation are evaluated in light of two theories of conceptual organization in the brain: the Sensory Functional Theory and the Domain-Specific Hypothesis. The dissociations observed in this case compel a model in which sensory/motor modality and semantic domain jointly constrain the organization of object knowledge. PMID:25058612
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements of following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
Bonin, Patrick; Méot, Alain; Lagarrigue, Aurélie; Roux, Sébastien
2015-01-01
We report an investigation of cross-task comparisons of handwritten latencies in written object naming, spelling to dictation, and immediate copying. In three separate sessions, adults had to write down a list of concrete nouns from their corresponding pictures (written naming), from their spoken (spelling to dictation) and from their visual presentation (immediate copying). Linear mixed models without random slopes were performed on the latencies in order to study and compare within-task fixed effects. By-participants random slopes were then included to investigate individual differences within and across tasks. Overall, the findings suggest that written naming, spelling to dictation, and copying all involve a lexical pathway, but that written naming relies on this pathway more than the other two tasks do. Only spelling to dictation strongly involves a nonlexical pathway. Finally, the analyses performed at the level of participants indicate that, depending on the type of task, the slower participants are more or less influenced by certain psycholinguistic variables.
The differential contributions of visual imagery constructs on autobiographical thinking.
Aydin, Cagla
2018-02-01
There is a growing theoretical and empirical consensus on the central role of visual imagery in autobiographical memory. However, findings from studies that explore how individual differences in visual imagery are reflected on autobiographical thinking do not present a coherent story. One reason for the mixed findings was suggested to be the treatment of visual imagery as an undifferentiated construct while evidence shows that there is more than one type of visual imagery. The present study investigates the relative contributions of different imagery constructs; namely, object and spatial imagery, on autobiographical memory processes. Additionally, it explores whether a similar relation extends to imagining the future. The results indicate that while object imagery was significantly correlated with several phenomenological characteristics, such as the level of sensory and perceptual details for past events - but not for future events - spatial imagery predicted the level of episodic specificity for both past and future events. We interpret these findings as object imagery being recruited in tasks of autobiographical memory that employ reflective processes while spatial imagery is engaged during direct retrieval of event details. Implications for the role of visual imagery in autobiographical thinking processes are discussed.
ERIC Educational Resources Information Center
Clarke, A. J. Benjamin; Ludington, Jason D.
2018-01-01
Normative databases containing psycholinguistic variables are commonly used to aid stimulus selection for investigations into language and other cognitive processes. Norms exist for many languages, but not for Thai. The aim of the present research, therefore, was to obtain Thai normative data for the BOSS, a set of 480 high resolution color…
Ambrosini, Ettore; Costantini, Marcello
2017-02-01
Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to action afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants' action possibility led to a significant reduction of this effect and significantly improved the model prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information to the generation of priority maps of fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.
Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi
Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it targets to classify objects belonging to the same species, e.g. , classify hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on the artificial annotations, e.g., bounding boxes, part annotations, and so on . The requirement of artificial annotations largely hinders the scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is "automated," because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by the human, the image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region saliently representing the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although only using the image-level labels, AutoBD outperforms the recent studies on two public benchmark, i.e. , classification accuracy achieves 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves the accuracy of 68%, which is currently the best performance to the best of our knowledge.Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it targets to classify objects belonging to the same species, e.g. , classify hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on the artificial annotations, e.g., bounding boxes, part annotations, and so on . The requirement of artificial annotations largely hinders the scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is "automated," because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by the human, the image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region saliently representing the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although only using the image-level labels, AutoBD outperforms the recent studies on two public benchmark, i.e. , classification accuracy achieves 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves the accuracy of 68%, which is currently the best performance to the best of our knowledge.
Moreno-Martínez, Francisco Javier; Montoro, Pedro R
2012-01-01
This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data from seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, knowing that it affects the processing of stimuli, this new set presents important advantages over other similar image corpi: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required, (e.g., to deal with problems derived from ceiling effects); (b) the fact of using coloured stimuli provides a more realistic, ecologically-valid, representation of real life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.
Object shape and orientation do not routinely influence performance during language processing.
Rommers, Joost; Meyer, Antje S; Huettig, Falk
2013-11-01
The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.
Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection
Lupyan, Gary; Spivey, Michael J.
2010-01-01
Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements of following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
Hallucinators find meaning in noises: pareidolic illusions in dementia with Lewy bodies.
Yokoi, Kayoko; Nishio, Yoshiyuki; Uchiyama, Makoto; Shimomura, Tatsuo; Iizuka, Osamu; Mori, Etsuro
2014-04-01
By definition, visual illusions and hallucinations differ in whether the perceived objects exist in reality. A recent study challenged this dichotomy, in which pareidolias, a type of complex visual illusion involving ambiguous forms being perceived as meaningful objects, are very common and phenomenologically similar to visual hallucinations in dementia with Lewy bodies (DLB). We hypothesise that a common psychological mechanism exists between pareidolias and visual hallucinations in DLB that confers meaning upon meaningless visual information. Furthermore, we believe that these two types of visual misperceptions have a common underlying neural mechanism, namely, cholinergic insufficiency. The current study investigated pareidolic illusions using meaningless visual noise stimuli (the noise pareidolia test) in 34 patients with DLB, 34 patients with Alzheimer׳s disease and 28 healthy controls. Fifteen patients with DLB were administered the noise pareidolia test twice, before and after donepezil treatment. Three major findings were discovered: (1) DLB patients saw meaningful illusory images (pareidolias) in meaningless visual stimuli, (2) the number of pareidolic responses correlated with the severity of visual hallucinations, and (3) cholinergic enhancement reduced both the number of pareidolias and the severity of visual hallucinations in patients with DLB. These findings suggest that a common underlying psychological and neural mechanism exists between pareidolias and visual hallucinations in DLB. Copyright © 2014 Elsevier Ltd. All rights reserved.
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have the characteristics resembling that of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.
Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-08-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).
Stephan, Claudia; Steurer, Michael M; Aust, Ulrike
2014-08-01
The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.
An interactive visualization tool for mobile objects
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuo
Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
To call a cloud 'cirrus': sound symbolism in names for categories or items.
Ković, Vanja; Sučević, Jelena; Styles, Suzy J
2017-01-01
The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.
Masuda, Takahiko; Ishii, Keiko; Miwa, Koji; Rashid, Marghalara; Lee, Hajin; Mahdi, Rania
2017-01-01
Recent findings have re-examined the linguistic influence on cognition and perception, while identifying evidence that supports the Whorfian hypothesis. We examine how English and Japanese speakers perceive similarity of pairs of objects, by using two sets of stimuli: one in which two distinct linguistic categories apply to respective object images in English, but only one linguistic category applies in Japanese; and another in which two distinct linguistic categories apply to respective object images in Japanese, but only one applies in English. We conducted four studies and tested different groups of participants in each of them. In Study 1, we asked participants to name the two objects before engaging in the similarity judgment task. Here, we expected a strong linguistic effect. In Study 2, we asked participants to engage in the same task without naming, where we assumed that the condition is close enough to our daily visual information processing where language is not necessarily prompted. We further explored whether the language still influences the similarity perception by asking participants to engage in the same task basing on the visual similarity (Study 3) and the functional similarity (Study 4). The results overall indicated that English and Japanese speakers perceived the two objects to be more similar when they were in the same linguistic categories than when they were in different linguistic categories in their respective languages. Implications for research testing the Whorfian hypothesis and the requirement for methodological development beyond behavioral measures are discussed. PMID:29018375
Posture Affects How Robots and Infants Map Words to Objects
Morse, Anthony F.; Benitez, Viridian L.; Belpaeme, Tony; Cangelosi, Angelo; Smith, Linda B.
2015-01-01
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body – and its momentary posture – may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the bodies’ momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge –not from separating bodily information from the word-object mapping as proposed in previous models of the role of space in word-object mapping – but through the body’s momentary disposition in space. PMID:25785834
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Real-world visual statistics and infants' first-learned object names
Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen
2017-01-01
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares are observably better than that attained by any available visual cryptography method known to date.
Kobayashi, Tomoka; Inagaki, Masumi; Yamazaki, Hiroko; Kita, Yosuke; Kaga, Makiko; Oka, Akira
2014-11-01
Developmental dyslexia (DD) is a neurodevelopmental disorder that is characterized by difficulties with accurate and/or fluent word recognition and by poor spelling and decoding abilities. The magnocellular deficit theory is one of several hypotheses that have been proposed to explain the pathophysiology of DD. In this study, we investigated magnocellular system dysfunction in Japanese dyslexic children. Subjects were 19 dyslexic children (DD group) and 19 aged-matched healthy children (TD group). They were aged between 7 and 16 years. Reversed patterns of black and white sinusoidal gratings generated at a low spatial frequency, high reversal frequency of 7.5 Hz, and low contrasts were used specifically to stimulate the magnocellular system. We recorded visual evoked potentials (VEP) from the occipital area and examined their relationship with reading and naming tasks, such as the time to read hiragana characters, rapid automatized naming of pictured objects, and phonological manipulation. Compared to the TD group, the DD group showed a significantly lower peak amplitude of VEPs through the complex demodulation method. Structural equation modeling showed that VEP peak amplitudes were related to the rapid automatized naming of pictured objects, and better rapid automatized naming resulted in higher reading skills. There was no correlation between VEP findings and the capacity for phonological manipulation. VEPs in response to the magnocellular system are useful for understanding the pathophysiology of DD. Single phonological deficit may not be sufficient to cause DD.
Cant, Jonathan S; Xu, Yaoda
2017-02-01
Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.
Objects predict fixations better than early saliency.
Einhäuser, Wolfgang; Spain, Merrielle; Perona, Pietro
2008-11-20
Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as mere preprocessing step for object recognition, models of both need to be integrated.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2014-04-29
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA
2011-02-01
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA
2012-03-20
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
Drane, Daniel L.; Loring, David W.; Voets, Natalie L.; Price, Michele; Ojemann, Jeffrey G.; Willie, Jon T.; Saindane, Amit M.; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L.; Miller, John W.; Meador, Kimford J.; Gross, Robert E.
2015-01-01
SUMMARY OBJECTIVES Temporal lobe epilepsy (TLE) patients experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, semantic memory), due to “collateral damage” to temporal regions outside the hippocampus following open surgical approaches. We predicted stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions critical for these cognitive processes. METHODS Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of nineteen patients with medically-intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant), and a comparable series of TLE patients undergoing standard surgical approaches (n=39) using a prospective, non-randomized, non-blinded, parallel group design. RESULTS Performance declines were significantly greater for the dominant TLE patients undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<.0001, η2=.57, & F=11.2, p<.001, η2=.39, respectively), and for the nondominant TLE patients undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<.02, η2=.19). When examined on an individual subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 undergoing standard surgical approaches declined on one or more measures for both object types (p<.001, Fisher’s exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (non-dominant) TLE patients declined on face recognition. SIGNIFICANCE Preliminary results suggest 1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and 2) the hippocampus does not appear to be an essential component of neural networks underlying name retrieval or recognition of common objects or famous faces. PMID:25489630
Khwaileh, Tariq; Mustafawi, Eiman; Herbert, Ruth; Howard, David
2018-02-15
Standardized pictorial stimuli and predictors of successful picture naming are not readily available for Gulf Arabic. On the basis of data obtained from Qatari Arabic, a variety of Gulf Arabic, the present study provides norms for a set of 319 object pictures and a set of 141 action pictures. Norms were collected from healthy speakers, using a picture-naming paradigm and rating tasks. Norms for naming latencies, name agreement, visual complexity, image agreement, imageability, age of acquisition, and familiarity were established. Furthermore, the database includes other intrinsic factors, such as syllable length and phoneme length. It also includes orthographic frequency values (extracted from Aralex; Boudelaa & Marslen-Wilson, 2010). These factors were then examined for their impact on picture-naming latencies in object- and action-naming tasks. The analysis showed that the primary determinants of naming latencies in both nouns and verbs are (in descending order) image agreement, name agreement, familiarity, age of acquisition, and imageability. These results indicate no evidence that noun- and verb-naming processes in Gulf Arabic are influenced in different ways by these variables. This is the first database for Gulf Arabic, and therefore the norms collected from the present study will be of paramount importance for researchers and clinicians working with speakers of this variety of Arabic. Due to the similarity of the Arabic varieties spoken in the Gulf, these different varieties are grouped together under the label "Gulf Arabic" in the literature. The normative databases and the standardized pictures from this study can be downloaded from http://qufaculty.qu.edu.qa/tariq-khwaileh/download-center/ .
Speakers of Different Languages Process the Visual World Differently
Chabal, Sarah; Marian, Viorica
2015-01-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171
The gender congruency effect during bilingual spoken-word recognition
Morales, Luis; Paolieri, Daniela; Dussias, Paola E.; Valdés kroff, Jorge R.; Gerfen, Chip; Bajo, María Teresa
2016-01-01
We investigate the ‘gender-congruency’ effect during a spoken-word recognition task using the visual world paradigm. Eye movements of Italian–Spanish bilinguals and Spanish monolinguals were monitored while they viewed a pair of objects on a computer screen. Participants listened to instructions in Spanish (encuentra la bufanda / ‘find the scarf’) and clicked on the object named in the instruction. Grammatical gender of the objects’ name was manipulated so that pairs of objects had the same (congruent) or different (incongruent) gender in Italian, but gender in Spanish was always congruent. Results showed that bilinguals, but not monolinguals, looked at target objects less when they were incongruent in gender, suggesting a between-language gender competition effect. In addition, bilinguals looked at target objects more when the definite article in the spoken instructions provided a valid cue to anticipate its selection (different-gender condition). The temporal dynamics of gender processing and cross-language activation in bilinguals are discussed. PMID:28018132
Further Development of Measures of Early Math Performance for Preschoolers
ERIC Educational Resources Information Center
VanDerHeyden, Amanda M.; Broussard, Carmen; Cooley, Amanda
2006-01-01
The purpose of this study was to examine the progress monitoring and screening accuracy for a set of curriculum-based measures (CBM) of early mathematics skills. Measures included counting objects, selecting numbers, naming numbers, counting, and visual discrimination. Measures were designed to be administered with preschoolers in a short period…
Colors in Mind: A Novel Paradigm to Investigate Pure Color Imagery
ERIC Educational Resources Information Center
Wantz, Andrea L.; Borst, Grégoire; Mast, Fred W.; Lobmaier, Janek S.
2015-01-01
Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., "Is a sunflower darker yellow than a lemon"?). Although this approach is widely used in patient studies, differences in the ability to perform such color…
Bókkon, I; Salari, V; Tuszynski, J A; Antal, I
2010-09-02
Recently, we have proposed a redox molecular hypothesis about the natural biophysical substrate of visual perception and imagery [1,6]. Namely, the retina transforms external photon signals into electrical signals that are carried to the V1 (striatecortex). Then, V1 retinotopic electrical signals (spike-related electrical signals along classical axonal-dendritic pathways) can be converted into regulated ultraweak bioluminescent photons (biophotons) through redox processes within retinotopic visual neurons that make it possible to create intrinsic biophysical pictures during visual perception and imagery. However, the consensus opinion is to consider biophotons as by-products of cellular metabolism. This paper argues that biophotons are not by-products, other than originating from regulated cellular radical/redox processes. It also shows that the biophoton intensity can be considerably higher inside cells than outside. Our simple calculations, within a level of accuracy, suggest that the real biophoton intensity in retinotopic neurons may be sufficient for creating intrinsic biophysical picture representation of a single-object image during visual perception. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Moreno-Martínez, Francisco Javier; Montoro, Pedro R.
2012-01-01
This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data from seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, knowing that it affects the processing of stimuli, this new set presents important advantages over other similar image corpi: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required, (e.g., to deal with problems derived from ceiling effects); (b) the fact of using coloured stimuli provides a more realistic, ecologically-valid, representation of real life objects. In sum, this set of stimuli provides a useful tool for research on visual object-and word- processing, both in neurological patients and in healthy controls. PMID:22662166
Electrophysiological correlates of retrieval orientation in reality monitoring.
Rosburg, Timm; Mecklinger, Axel; Johansson, Mikael
2011-02-14
Retrieval orientation describes the modulation in the processing of retrieval cues by the nature of the targeted material in memory. Retrieval orientation is usually investigated by analyzing the cortical responses to new (unstudied) material when different memory contents are targeted. This approach avoids confounding effects of retrieval success. We investigated the neural correlates of retrieval orientation in reality monitoring with event-related potentials (ERPs) and assessed the impact of retrieval accuracy on obtained ERP measures. Thirty-two subjects studied visually presented object names that were followed either by a picture of that object (perceived condition) or by the instruction to mentally generate such a picture (imagine condition). Subsequently, subjects had to identify object names of one study condition and reject object names of the second study condition together with newly presented object names. The data analysis showed that object names were more accurately identified when they had been presented in the perceived condition. Two topographically distinct ERP effects of retrieval orientation were revealed: From 600 to 1100 ms after stimulus representation, ERPs were more positive at frontal electrode sites when object names from the imagine condition were targeted. The analysis of response-locked ERP data revealed an additional effect at posterior electrode sites, with more negative ERPs shortly after response onset when items from the imagine condition were targeted. The ERP effect at frontal electrode sites, but not at posterior electrode sites was modulated by relative memory accuracy, with stronger effects in subjects who had lower memory accuracy for items of the imagine condition. The findings are suggestive for a contribution of frontal brain areas to retrieval orientation processes in reality monitoring and indicate that neural correlates of retrieval orientation can be modulated by retrieval effort, with stronger activation of these correlates with increasing task demands. Copyright © 2010 Elsevier Inc. All rights reserved.
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.
Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija
2015-08-01
A body of research shows that grammatical gender, although an arbitrary category, is viewed as the system with its own meaning. However, the question remains to what extent does grammatical gender influence shaping our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results obtained in Experiment 1 have shown that grammatical gender as a linguistic property of the pseudo-nouns used as names for musical instruments significantly affects people's representations about these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments will be shaped in the presence of both language and visual information. The results indicate that the co-existence of linguistic and visual information results in formation of concepts about selected instruments by all available information from both sources, thus suggesting that grammatical gender influences nonverbal concepts' forming, but has no privileged status in the matter.
Colors in mind: a novel paradigm to investigate pure color imagery.
Wantz, Andrea L; Borst, Grégoire; Mast, Fred W; Lobmaier, Janek S
2015-07-01
Mental color imagery abilities are commonly measured using paradigms that involve naming, judging, or comparing the colors of visual mental images of well-known objects (e.g., "Is a sunflower darker yellow than a lemon"?). Although this approach is widely used in patient studies, differences in the ability to perform such color comparisons might simply reflect participants' general knowledge of object colors rather than their ability to generate accurate visual mental images of the colors of the objects. The aim of the present study was to design a new color imagery paradigm. Participants were asked to visualize a color for 3 s and then to determine a visually presented color by pressing 1 of 6 keys. We reasoned that participants would react faster when the imagined and perceived colors were congruent than when they were incongruent. In Experiment 1, participants were slower in incongruent than congruent trials but only when they were instructed to visualize the colors. The results in Experiment 2 demonstrate that the congruency effect reported in Experiment 1 cannot be attributed to verbalization of the color that had to be visualized. Finally, in Experiment 3, the congruency effect evoked by mental imagery correlated with performance in a perceptual version of the task. We discuss these findings with respect to the mechanisms that underlie mental imagery and patients suffering from color imagery deficits. (c) 2015 APA, all rights reserved.
Thibaut, Miguel; Tran, Thi Ha Chau; Delerue, Céline; Boucart, Muriel
2015-05-01
Previous studies showed that people with age-related macular degeneration (AMD) can categorise a pre-defined target object or scene with high accuracy (above 80%). In these studies participants were asked to detect the target (e.g. an animal) in serial visual presentation. People with AMD must rely on peripheral vision which is more adapted to the low resolution required for detection than for the higher resolution required to identify a specific exemplar. We investigated the ability of people with central vision loss to identify photographs of objects and scenes. Photographs of isolated objects, natural scenes and objects in scenes were centrally displayed for 2 s each. Participants were asked to name the stimuli. We measured accuracy and naming times in 20 patients with AMD, 15 age-matched and 12 young controls. Accuracy was lower (by about 30%) and naming times were longer (by about 300 ms) in people with AMD than in age-matched controls in the three categories of images. Correct identification occurred in 62-66% of the stimuli for patients. More than 20% of the misidentifications resulted from a structural and/or semantic similarity between the object and the name (e.g. spectacles for dog plates or dolphin for shark). Accuracy and naming times did not differ significantly between young and older normally sighted participants indicating that the deficits resulted from pathology rather than to normal ageing. These results show that, in contrast to performance for categorisation of a single pre-defined target, people with central vision loss are impaired at identifying various objects and scenes. The decrease in accuracy and the increase in response times in patients with AMD indicate that peripheral vision might be sufficient for object and scene categorisation but not for precise scene or object identification. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
A normal' category-specific advantage for naming living things.
Laws, K R; Neve, C
1999-10-01
'Artefactual' accounts of category-specific disorders for living things have highlighted that compared to nonliving things, living things have lower name frequency, lower concept familiarity and greater visual complexity and greater within-category structural similarity or 'visual crowding' [7]. These hypotheses imply that deficits for living things are an exaggeration of some 'normal tendency'. Contrary to these notions, we found that normal subjects were consistently worse at naming nonliving than living things in a speeded presentation paradigm. Moreover, their naming was not predicted by concept familiarity, name frequency or visual complexity; however, a novel measure of visual familiarity (i.e. for the appearance of things) did significantly predict naming. We propose that under speeded conditions, normal subjects find nonliving things harder to name because their representations are less visually predictable than for living things (i.e. nonliving things show greater within-item structural variability). Finally, because nonliving things have multiple representations in the real world, this may lower the probability of finding impaired naming and recognition in this category.
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
Lévy-like diffusion in eye movements during spoken-language comprehension
NASA Astrophysics Data System (ADS)
Stephen, Damian G.; Mirman, Daniel; Magnuson, James S.; Dixon, James A.
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
Patterns of neural activity predict picture-naming performance of a patient with chronic aphasia.
Lee, Yune Sang; Zreik, Jihad T; Hamilton, Roy H
2017-01-08
Naming objects represents a substantial challenge for patients with chronic aphasia. This could be in part because the reorganized compensatory language networks of persons with aphasia may be less stable than the intact language systems of healthy individuals. Here, we hypothesized that the degree of stability would be instantiated by spatially differential neural patterns rather than either increased or diminished amplitudes of neural activity within a putative compensatory language system. We recruited a chronic aphasic patient (KL; 66 year-old male) who exhibited a semantic deficit (e.g., often said "milk" for "cow" and "pillow" for "blanket"). Over the course of four behavioral sessions involving a naming task performed in a mock scanner, we identified visual objects that yielded an approximately 50% success rate. We then conducted two fMRI sessions in which the patient performed a naming task for multiple exemplars of those objects. Multivoxel pattern analysis (MVPA) searchlight revealed differential activity patterns associated with correct and incorrect trials throughout intact brain regions. The most robust and largest cluster was found in the right occipito-temporal cortex encompassing fusiform cortex, lateral occipital cortex (LOC), and middle occipital cortex, which may account for the patient's propensity for semantic naming errors. None of these areas were found by a conventional univariate analysis. By using an alternative approach, we extend current evidence for compensatory naming processes that operate through spatially differential patterns within the reorganized language system. Copyright © 2016 Elsevier Ltd. All rights reserved.
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The roles of stimulus repetition and hemispheric activation in visual half-field asymmetries.
Sullivan, K F; McKeever, W F
1985-10-01
Hardyck, Tzeng, and Wang (1978, Brain and Language, 5, 56-71) hypothesized that ample repetition of a small number of stimuli is required in order to obtain VHF differences in tachistoscopic tasks. Four experiments, with varied levels of repetition, were conducted to test this hypothesis. Three experiments utilized the general task of object-picture naming and one utilized a word-naming task. Naming latencies constituted the dependent measure. The results demonstrate that for the object-naming paradigm repetition is required for RVF superiority to emerge. Repetition was found to be unnecessary for RVF superiority in the word-naming paradigm, with repetition actually reducing RVF superiority. Experiment I suggested the possibility that RVF superiority developed for the second half of the trials as a function of practice or hemispheric activation, regardless of repetition level. Subsequent experiments, better designed to assess this possibility, clearly refuted it. It was concluded that the effect of repetition depends on the processing requirements of the task. We propose that, for tasks which can be processed efficiently by one hemisphere, the effect of repetition will be to reduce VHF asymmetries; but tasks requiring substantial processing by both hemispheres will show shifts to RVF superiority as a function of repetition.
Speakers of different languages process the visual world differently.
Chabal, Sarah; Marian, Viorica
2015-06-01
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved).
Ultrafast scene detection and recognition with limited visual information
Hagmann, Carl Erick; Potter, Mary C.
2016-01-01
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
Automatic textual annotation of video news based on semantic visual object extraction
NASA Astrophysics Data System (ADS)
Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem
2003-12-01
In this paper, we present our work for automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross modal image-text thesaurus. These thesaurus represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and Tv logos. In the first part, we present our work for efficient face detection and recogniton with automatic name generation. This method allows us also to suggest the textual annotation of shots close-up estimation. On the other hand, we were interested to automatically detect and recognize different Tv logos present on incoming different news from different Tv Channels. This work was done jointly with the French Tv Channel TF1 within the "MediaWorks" project that consists on an hybrid text-image indexing and retrieval plateform for video news.
Real-world visual statistics and infants' first-learned object names.
Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B
2017-01-05
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present-a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Size matters: bigger is faster.
Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E
2009-06-01
A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.
Improving language mapping in clinical fMRI through assessment of grammar.
Połczyńska, Monika; Japardi, Kevin; Curtiss, Susan; Moody, Teena; Benjamin, Christopher; Cho, Andrew; Vigil, Celia; Kuhn, Taylor; Jones, Michael; Bookheimer, Susan
2017-01-01
Brain surgery in the language dominant hemisphere remains challenging due to unintended post-surgical language deficits, despite using pre-surgical functional magnetic resonance (fMRI) and intraoperative cortical stimulation. Moreover, patients are often recommended not to undergo surgery if the accompanying risk to language appears to be too high. While standard fMRI language mapping protocols may have relatively good predictive value at the group level, they remain sub-optimal on an individual level. The standard tests used typically assess lexico-semantic aspects of language, and they do not accurately reflect the complexity of language either in comprehension or production at the sentence level. Among patients who had left hemisphere language dominance we assessed which tests are best at activating language areas in the brain. We compared grammar tests (items testing word order in actives and passives, wh -subject and object questions, relativized subject and object clauses and past tense marking) with standard tests (object naming, auditory and visual responsive naming), using pre-operative fMRI. Twenty-five surgical candidates (13 females) participated in this study. Sixteen patients presented with a brain tumor, and nine with epilepsy. All participants underwent two pre-operative fMRI protocols: one including CYCLE-N grammar tests (items testing word order in actives and passives, wh-subject and object questions, relativized subject and object clauses and past tense marking); and a second one with standard fMRI tests (object naming, auditory and visual responsive naming). fMRI activations during performance in both protocols were compared at the group level, as well as in individual candidates. The grammar tests generated more volume of activation in the left hemisphere (left/right angular gyrus, right anterior/posterior superior temporal gyrus) and identified additional language regions not shown by the standard tests (e.g., left anterior/posterior supramarginal gyrus). The standard tests produced more activation in left BA 47. Ten participants had more robust activations in the left hemisphere in the grammar tests and two in the standard tests. The grammar tests also elicited substantial activations in the right hemisphere and thus turned out to be superior at identifying both right and left hemisphere contribution to language processing. The grammar tests may be an important addition to the standard pre-operative fMRI testing.
Visual search for arbitrary objects in real scenes
Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.
2011-01-01
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
Visual search for arbitrary objects in real scenes.
Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M
2011-08-01
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
Parameterized hardware description as object oriented hardware model implementation
NASA Astrophysics Data System (ADS)
Drabik, Pawel K.
2010-09-01
The paper introduces novel model for design, visualization and management of complex, highly adaptive hardware systems. The model settles component oriented environment for both hardware modules and software application. It is developed on parameterized hardware description research. Establishment of stable link between hardware and software, as a purpose of designed and realized work, is presented. Novel programming framework model for the environment, named Graphic-Functional-Components is presented. The purpose of the paper is to present object oriented hardware modeling with mentioned features. Possible model implementation in FPGA chips and its management by object oriented software in Java is described.
Mapping language to visual referents: Does the degree of image realism matter?
Saryazdi, Raheleh; Chambers, Craig G
2018-01-01
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driving mappings. The results are discussed in terms of their implications for studies of visually situated language processing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Inferring difficulty: Flexibility in the real-time processing of disfluency
Heller, Daphna; Arnold, Jennifer E.; Klein, Natalie M.; Tanenhaus, Michael K.
2015-01-01
Upon hearing a disfluent referring expression, listeners expect the speaker to refer to an object that is previously-unmentioned, an object that does not have a straightforward label, or an object that requires a longer description. Two visual-world eye-tracking experiments examined whether listeners directly associate disfluency with these properties of objects, or whether disfluency attribution is more flexible and involves situation-specific inferences. Since in natural situations reference to objects that do not have a straightforward label or that require a longer description is correlated with both production difficulty and with disfluency, we used a mini artificial lexicon to dissociate difficulty from these properties, building on the fact that recently-learned names take longer to produce than existing words in one’s mental lexicon. The results demonstrate that disfluency attribution involves situation-specific inferences; we propose that in new situations listeners spontaneously infer what may cause production difficulty. However, the results show that these situation-specific inferences are limited in scope: listeners assessed difficulty relative to their own experience with the artificial names, and did not adapt to the assumed knowledge of the speaker. PMID:26677642
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
The present models of image caption generation have the problems of image visual semantic information attenuation and errors in guidance information. In order to solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the current interested information from the image visual se-mantic information based on guidance word set. The interested information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through the back-propagation of the guiding loss. Complementing guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model can reduce the impact of the mismatched words on the caption generation. We test our model on MSCOCO2014 dataset, and obtain better performance than the state-of-the- art models.
Zimmer, Hubert D; Lehnert, Günther
2006-01-01
If configurations of objects are presented in a S1-S2 matching task for the identity of objects a spatial mismatch effect occurs. Changing the (irrelevant) spatial layout lengthens response times. We investigated what causes this effect. We observed a reliable mismatch effect that was not influenced by a secondary task during maintenance. Neither articulatory suppression (Experiment 1), nor unattended (Experiments 2 and 6) or attended visual material (Experiment 3) reduced the effect, and this was independent of the length of the retention interval (Experiment 6). The effect was also rather independent of the visual appearance of the local elements. It was of similar size with color patches (Experiment 4) and with completely different surface information when testing was cross modal (Experiment 5), and the name-ability of the global configuration was not relevant (Experiments 6 and 7). In contrast, the figurative similarity of the configurations of S1 and S2 systematically influenced the size of the spatial mismatch effect (Experiment 7). We conclude that the spatial mismatch effect is caused by a mismatch of the global shape of the configuration stored together with the objects of S1 and not by a mismatch of templates of perceptual records maintained in a visual cache.
Semantic representation in the white matter pathway
Fang, Yuxing; Wang, Xiaosha; Zhong, Suyu; Song, Luping; Han, Zaizhu; Gong, Gaolang
2018-01-01
Object conceptual processing has been localized to distributed cortical regions that represent specific attributes. A challenging question is how object semantic space is formed. We tested a novel framework of representing semantic space in the pattern of white matter (WM) connections by extending the representational similarity analysis (RSA) to structural lesion pattern and behavioral data in 80 brain-damaged patients. For each WM connection, a neural representational dissimilarity matrix (RDM) was computed by first building machine-learning models with the voxel-wise WM lesion patterns as features to predict naming performance of a particular item and then computing the correlation between the predicted naming score and the actual naming score of another item in the testing patients. This correlation was used to build the neural RDM based on the assumption that if the connection pattern contains certain aspects of information shared by the naming processes of these two items, models trained with one item should also predict naming accuracy of the other. Correlating the neural RDM with various cognitive RDMs revealed that neural patterns in several WM connections that connect left occipital/middle temporal regions and anterior temporal regions associated with the object semantic space. Such associations were not attributable to modality-specific attributes (shape, manipulation, color, and motion), to peripheral picture-naming processes (picture visual similarity, phonological similarity), to broad semantic categories, or to the properties of the cortical regions that they connected, which tended to represent multiple modality-specific attributes. That is, the semantic space could be represented through WM connection patterns across cortical regions representing modality-specific attributes. PMID:29624578
Commognitive analysis of undergraduate mathematics students' first encounter with the subgroup test
NASA Astrophysics Data System (ADS)
Ioannou, Marios
2018-06-01
This study analyses learning aspects of undergraduate mathematics students' first encounter with the subgroup test, using the commognitive theoretical framework. It focuses on students' difficulties as these are related to the object-level and metalevel mathematical learning in group theory, and, when possible, highlights any commognitive conflicts. In the data analysis, one can identify three types of difficulties, relevant to object-level learning: namely regarding the frequently observed confusion between groups and sets, the object-level rules of visual mediators, and the object-level rules of contextual notions, such as permutations, exponentials, sets and matrices. In addition, data analysis suggests two types of difficulties, relevant to metalevel learning. The first refers to the actual proof that the three conditions of subgroup test hold, and the second is related to syntactic inaccuracies, incomplete argumentation and problematic use of visual mediators. Finally, this study suggests that there are clear links between object-level and metalevel learning, mainly due to the fact that objectification of the various relevant mathematical notions influences the endorsement of the governing metarules.
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.
History and future of visual anthropology.
Svilicić, Niksa
2011-03-01
Visual recording of communication processes between communities or individuals by means of filming of photographing is of significant importance in anthropology, as it documents on site the specific features of various social communities in their encounter with the researcher. In terms of film industry, it is a sort of ethno-documentary pursuing originality and objectivity in recording the given subject, thus fulfilling the research mission. However, the potential of visual anthropology significantly exceeds the mere audiovisual recording of ethnologic realities. Modern methods of analysing and evaluating the role of visual anthropology suggest that it is a technical research service aimed at documenting the status quo. If the direction of proactive approach were taken, then the term ,visual anthropology' could be changed to ,anthropology of the visual,. This apparently cosmetic change of name is actually significantly more accurate, suggesting the denoted proactive swift in perceiving visual anthropology, where visual methods are employed to ,provoke< the reaction of an individual or of the community. In this way the "anthropology of the visual, is promoted to a new scientific sub-anthropological discipline.
Physical Activity Is Positively Associated with Episodic Memory in Aging.
Hayes, Scott M; Alosco, Michael L; Hayes, Jasmeet P; Cadden, Margaret; Peterson, Kristina M; Allsup, Kelly; Forman, Daniel E; Sperling, Reisa A; Verfaellie, Mieke
2015-11-01
Aging is associated with performance reductions in executive function and episodic memory, although there is substantial individual variability in cognition among older adults. One factor that may be positively associated with cognition in aging is physical activity. To date, few studies have objectively assessed physical activity in young and older adults, and examined whether physical activity is differentially associated with cognition in aging. Young (n=29, age 18-31 years) and older adults (n=31, ages 55-82 years) completed standardized neuropsychological testing to assess executive function and episodic memory capacities. An experimental face-name relational memory task was administered to augment assessment of episodic memory. Physical activity (total step count and step rate) was objectively assessed using an accelerometer, and hierarchical regressions were used to evaluate relationships between cognition and physical activity. Older adults performed more poorly on tasks of executive function and episodic memory. Physical activity was positively associated with a composite measure of visual episodic memory and face-name memory accuracy in older adults. Physical activity associations with cognition were independent of sedentary behavior, which was negatively correlated with memory performance. Physical activity was not associated with cognitive performance in younger adults. Physical activity is positively associated with episodic memory performance in aging. The relationship appears to be strongest for face-name relational memory and visual episodic memory, likely attributable to the fact that these tasks make strong demands on the hippocampus. The results suggest that physical activity relates to cognition in older, but not younger adults.
A case of complex regional pain syndrome with agnosia for object orientation.
Robinson, Gail; Cohen, Helen; Goebel, Andreas
2011-07-01
This systematic investigation of the neurocognitive correlates of complex regional pain syndrome (CRPS) in a single case also reports agnosia for object orientation in the context of persistent CRPS. We report a patient (JW) with severe long-standing CRPS who had no difficulty identifying and naming line drawings of objects presented in 1 of 4 cardinal orientations. In contrast, he was extremely poor at reorienting these objects into the correct upright orientation and in judging whether an object was upright or not. Moreover, JW made orientation errors when copying drawings of objects, and he also showed features of mirror reversal in writing single words and reading single letters. The findings are discussed in relation to accounts of visual processing. Agnosia for object orientation is the term for impaired knowledge of an object's orientation despite good recognition and naming of the same misoriented object. This defect has previously only been reported in patients with major structural brain lesions. The neuroanatomical correlates are discussed. The patient had no structural brain lesion, raising the possibility that nonstructural reorganisation of cortical networks may be responsible for his deficits. Other patients with CRPS may have related neurocognitive defects. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
Cultural differences in visual object recognition in 3-year-old children
Kuwabara, Megumi; Smith, Linda B.
2016-01-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3 year olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children and likelihood of recognition increased for U.S., but not Japanese children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576
Cultural differences in visual object recognition in 3-year-old children.
Kuwabara, Megumi; Smith, Linda B
2016-07-01
Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.
Testing memory for unseen visual stimuli in patients with extinction and spatial neglect.
Vuilleumier, Patrik; Schwartz, Sophie; Clarke, Karen; Husain, Masud; Driver, Jon
2002-08-15
Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.
Object memory and change detection: dissociation as a function of visual and conceptual similarity.
Yeh, Yei-Yu; Yang, Cheng-Ta
2008-01-01
People often fail to detect a change between two visual scenes, a phenomenon referred to as change blindness. This study investigates how a post-change object's similarity to the pre-change object influences memory of the pre-change object and affects change detection. The results of Experiment 1 showed that similarity lowered detection sensitivity but did not affect the speed of identifying the pre-change object, suggesting that similarity between the pre- and post-change objects does not degrade the pre-change representation. Identification speed for the pre-change object was faster than naming the new object regardless of detection accuracy. Similarity also decreased detection sensitivity in Experiment 2 but improved the recognition of the pre-change object under both correct detection and detection failure. The similarity effect on recognition was greatly reduced when 20% of each pre-change stimulus was masked by random dots in Experiment 3. Together the results suggest that the level of pre-change representation under detection failure is equivalent to the level under correct detection and that the pre-change representation is almost complete. Similarity lowers detection sensitivity but improves explicit access in recognition. Dissociation arises between recognition and change detection as the two judgments rely on the match-to-mismatch signal and mismatch-to-match signal, respectively.
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.
Seeing Objects as Faces Enhances Object Detection
Watanabe, Katsumi
2015-01-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of Junior High School students in solving contextual problems based on cognitive style. Mathematical visualization process in this research was seen from aspects of image generation, image inspection, image scanning, and image transformation. The research subject was the students in the eighth grade based on GEFT test (Group Embedded Figures Test) adopted from Within to determining the category of cognitive style owned by the students namely field independent or field dependent and communicative. The data collection was through visualization test in contextual problem and interview. The validity was seen through time triangulation. The data analysis referred to the aspect of mathematical visualization through steps of categorization, reduction, discussion, and conclusion. The results showed that field-independent and field-dependent subjects were difference in responding to contextual problems. The field-independent subject presented in the form of 2D and 3D, while the field-dependent subject presented in the form of 3D. Both of the subjects had different perception to see the swimming pool. The field-independent subject saw from the top, while the field-dependent subject from the side. The field-independent subject chose to use partition-object strategy, while the field-dependent subject chose to use general-object strategy. Both the subjects did transformation in an object rotation to get the solution. This research is reference to mathematical curriculum developers of Junior High School in Indonesia. Besides, teacher could develop the students' mathematical visualization by using technology media or software, such as geogebra, portable cabri in learning.
Hope, Thomas M H; Leff, Alex P; Prejawa, Susan; Bruce, Rachel; Haigh, Zula; Lim, Louise; Ramsden, Sue; Oberhuber, Marion; Ludersdorfer, Philipp; Crinion, Jenny; Seghier, Mohamed L; Price, Cathy J
2017-06-01
Stroke survivors with acquired language deficits are commonly thought to reach a 'plateau' within a year of stroke onset, after which their residual language skills will remain stable. Nevertheless, there have been reports of patients who appear to recover over years. Here, we analysed longitudinal change in 28 left-hemisphere stroke patients, each more than a year post-stroke when first assessed-testing each patient's spoken object naming skills and acquiring structural brain scans twice. Some of the patients appeared to improve over time while others declined; both directions of change were associated with, and predictable given, structural adaptation in the intact right hemisphere of the brain. Contrary to the prevailing view that these patients' language skills are stable, these results imply that real change continues over years. The strongest brain-behaviour associations (the 'peak clusters') were in the anterior temporal lobe and the precentral gyrus. Using functional magnetic resonance imaging, we confirmed that both regions are actively involved when neurologically normal control subjects name visually presented objects, but neither appeared to be involved when the same participants used a finger press to make semantic association decisions on the same stimuli. This suggests that these regions serve word-retrieval or articulatory functions in the undamaged brain. We teased these interpretations apart by reference to change in other tasks. Consistent with the claim that the real change is occurring here, change in spoken object naming was correlated with change in two other similar tasks, spoken action naming and written object naming, each of which was independently associated with structural adaptation in similar (overlapping) right hemisphere regions. Change in written object naming, which requires word-retrieval but not articulation, was also significantly more correlated with both (i) change in spoken object naming; and (ii) structural adaptation in the two peak clusters, than was change in another task-auditory word repetition-which requires articulation but not word retrieval. This suggests that the changes in spoken object naming reflected variation at the level of word-retrieval processes. Surprisingly, given their qualitatively similar activation profiles, hypertrophy in the anterior temporal region was associated with improving behaviour, while hypertrophy in the precentral gyrus was associated with declining behaviour. We predict that either or both of these regions might be fruitful targets for neural stimulation studies (suppressing the precentral region and/or enhancing the anterior temporal region), aiming to encourage recovery or arrest decline even years after stroke occurs. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
Leff, Alex P.; Prejawa, Susan; Bruce, Rachel; Haigh, Zula; Lim, Louise; Ramsden, Sue; Oberhuber, Marion; Ludersdorfer, Philipp; Crinion, Jenny; Seghier, Mohamed L.; Price, Cathy J.
2017-01-01
Abstract Stroke survivors with acquired language deficits are commonly thought to reach a ‘plateau’ within a year of stroke onset, after which their residual language skills will remain stable. Nevertheless, there have been reports of patients who appear to recover over years. Here, we analysed longitudinal change in 28 left-hemisphere stroke patients, each more than a year post-stroke when first assessed—testing each patient’s spoken object naming skills and acquiring structural brain scans twice. Some of the patients appeared to improve over time while others declined; both directions of change were associated with, and predictable given, structural adaptation in the intact right hemisphere of the brain. Contrary to the prevailing view that these patients’ language skills are stable, these results imply that real change continues over years. The strongest brain–behaviour associations (the ‘peak clusters’) were in the anterior temporal lobe and the precentral gyrus. Using functional magnetic resonance imaging, we confirmed that both regions are actively involved when neurologically normal control subjects name visually presented objects, but neither appeared to be involved when the same participants used a finger press to make semantic association decisions on the same stimuli. This suggests that these regions serve word-retrieval or articulatory functions in the undamaged brain. We teased these interpretations apart by reference to change in other tasks. Consistent with the claim that the real change is occurring here, change in spoken object naming was correlated with change in two other similar tasks, spoken action naming and written object naming, each of which was independently associated with structural adaptation in similar (overlapping) right hemisphere regions. Change in written object naming, which requires word-retrieval but not articulation, was also significantly more correlated with both (i) change in spoken object naming; and (ii) structural adaptation in the two peak clusters, than was change in another task—auditory word repetition—which requires articulation but not word retrieval. This suggests that the changes in spoken object naming reflected variation at the level of word-retrieval processes. Surprisingly, given their qualitatively similar activation profiles, hypertrophy in the anterior temporal region was associated with improving behaviour, while hypertrophy in the precentral gyrus was associated with declining behaviour. We predict that either or both of these regions might be fruitful targets for neural stimulation studies (suppressing the precentral region and/or enhancing the anterior temporal region), aiming to encourage recovery or arrest decline even years after stroke occurs. PMID:28444235
A lexical semantic hub for heteromodal naming in middle fusiform gyrus.
Forseth, Kiefer James; Kadipasaoglu, Cihan Mehmet; Conner, Christopher Richard; Hickok, Gregory; Knight, Robert Thomas; Tandon, Nitin
2018-07-01
Semantic memory underpins our understanding of objects, people, places, and ideas. Anomia, a disruption of semantic memory access, is the most common residual language disturbance and is seen in dementia and following injury to temporal cortex. While such anomia has been well characterized by lesion symptom mapping studies, its pathophysiology is not well understood. We hypothesize that inputs to the semantic memory system engage a specific heteromodal network hub that integrates lexical retrieval with the appropriate semantic content. Such a network hub has been proposed by others, but has thus far eluded precise spatiotemporal delineation. This limitation in our understanding of semantic memory has impeded progress in the treatment of anomia. We evaluated the cortical structure and dynamics of the lexical semantic network in driving speech production in a large cohort of patients with epilepsy using electrocorticography (n = 64), functional MRI (n = 36), and direct cortical stimulation (n = 30) during two generative language processes that rely on semantic knowledge: visual picture naming and auditory naming to definition. Each task also featured a non-semantic control condition: scrambled pictures and reversed speech, respectively. These large-scale data of the left, language-dominant hemisphere uniquely enable convergent, high-resolution analyses of neural mechanisms characterized by rapid, transient dynamics with strong interactions between distributed cortical substrates. We observed three stages of activity during both visual picture naming and auditory naming to definition that were serially organized: sensory processing, lexical semantic processing, and articulation. Critically, the second stage was absent in both the visual and auditory control conditions. Group activity maps from both electrocorticography and functional MRI identified heteromodal responses in middle fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus; furthermore, the spectrotemporal profiles of these three regions revealed coincident activity preceding articulation. Only in the middle fusiform gyrus did direct cortical stimulation disrupt both naming tasks while still preserving the ability to repeat sentences. These convergent data strongly support a model in which a distinct neuroanatomical substrate in middle fusiform gyrus provides access to object semantic information. This under-appreciated locus of semantic processing is at risk in resections for temporal lobe epilepsy as well as in trauma and strokes that affect the inferior temporal cortex-it may explain the range of anomic states seen in these conditions. Further characterization of brain network behaviour engaging this region in both healthy and diseased states will expand our understanding of semantic memory and further development of therapies directed at anomia.
A study of perceptual analysis in a high-level autistic subject with exceptional graphic abilities.
Mottron, L; Belleville, S
1993-11-01
We report here the case study of a patient (E.C.) with an Asperger syndrome, or autism with quasinormal intelligence, who shows an outstanding ability for three-dimensional drawing of inanimate objects (savant syndrome). An assessment of the subsystems proposed in recent models of object recognition evidenced intact perceptual analysis and identification. The initial (or primal sketch), viewer-centered (or 2-1/2-D), or object-centered (3-D) representations and the recognition and name levels were functional. In contrast, E.C.'s pattern of performance in three different types of tasks converge to suggest an anomaly in the hierarchical organization of the local and global parts of a figure: a local interference effect in incongruent hierarchical visual stimuli, a deficit in relating local parts to global form information in impossible figures, and an absence of feature-grouping in graphic recall. The results are discussed in relation to normal visual perception and to current accounts of the savant syndrome in autism.
Drane, Daniel L; Loring, David W; Voets, Natalie L; Price, Michele; Ojemann, Jeffrey G; Willie, Jon T; Saindane, Amit M; Phatak, Vaishali; Ivanisevic, Mirjana; Millis, Scott; Helmers, Sandra L; Miller, John W; Meador, Kimford J; Gross, Robert E
2015-01-01
Patients with temporal lobe epilepsy (TLE) experience significant deficits in category-related object recognition and naming following standard surgical approaches. These deficits may result from a decoupling of core processing modules (e.g., language, visual processing, and semantic memory), due to "collateral damage" to temporal regions outside the hippocampus following open surgical approaches. We predicted that stereotactic laser amygdalohippocampotomy (SLAH) would minimize such deficits because it preserves white matter pathways and neocortical regions that are critical for these cognitive processes. Tests of naming and recognition of common nouns (Boston Naming Test) and famous persons were compared with nonparametric analyses using exact tests between a group of 19 patients with medically intractable mesial TLE undergoing SLAH (10 dominant, 9 nondominant), and a comparable series of TLE patients undergoing standard surgical approaches (n=39) using a prospective, nonrandomized, nonblinded, parallel-group design. Performance declines were significantly greater for the patients with dominant TLE who were undergoing open resection versus SLAH for naming famous faces and common nouns (F=24.3, p<0.0001, η2=0.57, and F=11.2, p<0.001, η2=0.39, respectively), and for the patients with nondominant TLE undergoing open resection versus SLAH for recognizing famous faces (F=3.9, p<0.02, η2=0.19). When examined on an individual subject basis, no SLAH patients experienced any performance declines on these measures. In contrast, 32 of the 39 patients undergoing standard surgical approaches declined on one or more measures for both object types (p<0.001, Fisher's exact test). Twenty-one of 22 left (dominant) TLE patients declined on one or both naming tasks after open resection, while 11 of 17 right (nondominant) TLE patients declined on face recognition. Preliminary results suggest (1) naming and recognition functions can be spared in TLE patients undergoing SLAH, and (2) the hippocampus does not appear to be an essential component of neural networks underlying name retrieval or recognition of common objects or famous faces. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
Phonological processing of ignored distractor pictures, an fMRI investigation.
Bles, Mart; Jansma, Bernadette M
2008-02-11
Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations, while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction), or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, the name of ignored distractor pictures is retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user. Namely, users can interact with the real world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device to display visual objects, as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, the accurate calibration and registration are most important. A vision-based method is used to estimate CCD external parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.
VizieR Online Data Catalog: KiDS Survey for solar system objects mining (Mahlke+, 2018)
NASA Astrophysics Data System (ADS)
Mahlke, M.; Bouy, H.; Altieri, B.; Verdoes Kleijn, G.; Carry, B.; Bertin, E.; de Jong, J. T. A.; Kuijken, K.; McFarland, J.; Valentijn, E.
2017-10-01
Provided are the observations of the 28,290 SSO candidates recovered from the KiDS survey. The candidates are split up into two subsamples; the first contains 20,221 candidates with an estimated false-positive content of less than 0.05%. The second sample contains 8,069 candidates with only three observations each or close to bright stars, with an estimated false-positive content of approximately 24%. Provided are the recovered positions in right ascension and declination, the observation epochs, the calculated proper motions, the magnitudes, the observation bands, and the object name and expected visual magnitude if the object was matched to a SkyBoT object (entries are empty if no match was found). (2 data files).
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
Effect of a synesthete's photisms on name recall.
Mills, Carol Bergfeld; Innis, Joanne; Westendorf, Taryn; Owsianiecki, Lauren; McDonald, Angela
2006-02-01
A multilingual, colored-letter synesthete professor (MLS), 9 nonsynesthete multilingual professors and 4 nonsynesthete art professors learned 30 names of individuals (first and last name pairs) in three trials. They recalled the names after each trial and six months later, as well as performed cued recall trials initially and after six months. As hypothesized, MLS recalled significantly more names than control groups on all free recall tests (except after the first trial) and on cued recall tests. In addition, MLS gave qualitatively different reasons for remembering names than any individual control participant. MLS gave mostly color reasons for remembering the names, whereas nonsynesthetes gave reasons based on familiarity or language or art knowledge. Results on standardized memory tests showed that MLS had average performance on non-language visual memory tests (the Benton Visual Retention Test-Revised--BURT-R, and the Rey-Osterrieth Complex Figure Test--CFT), but had superior memory performance on a verbal test consisting of lists of nouns (Rey Auditory-Verbal Learning Test--RAVLT). MLS's synesthesia seems to aid memory for visually or auditorily presented language stimuli (names and nouns), but not for non-language visual stimuli (simple and complex figures).
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and from changing the sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithms ability with additional challenges in long complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
Physical Activity Is Positively Associated with Episodic Memory in Aging
Hayes, Scott M.; Alosco, Michael L.; Hayes, Jasmeet P.; Cadden, Margaret; Peterson, Kristina M.; Allsup, Kelly; Forman, Daniel E.; Sperling, Reisa A.; Verfaellie, Mieke
2016-01-01
Aging is associated with performance reductions in executive function and episodic memory, although there is substantial individual variability in cognition among older adults. One factor that may be positively associated with cognition in aging is physical activity. To date, few studies have objectively assessed physical activity in young and older adults, and examined whether physical activity is differentially associated with cognition in aging. Young (n = 29, age 18–31 years) and older adults (n = 31, ages 55–82 years) completed standardized neuropsychological testing to assess executive function and episodic memory capacities. An experimental face-name relational memory task was administered to augment assessment of episodic memory. Physical activity (total step count and step rate) was objectively assessed using an accelerometer, and hierarchical regressions were used to evaluate relationships between cognition and physical activity. Older adults performed more poorly on tasks of executive function and episodic memory. Physical activity was positively associated with a composite measure of visual episodic memory and face-name memory accuracy in older adults. Physical activity associations with cognition were independent of sedentary behavior, which was negatively correlated with memory performance. Physical activity was not associated with cognitive performance in younger adults. Physical activity is positively associated with episodic memory performance in aging. The relationship appears to be strongest for face-name relational memory and visual episodic memory, likely attributable to the fact that these tasks make strong demands on the hippocampus. The results suggest that physical activity relates to cognition in older, but not younger adults. PMID:26581790
Visual aid titled 'The Magellan Mission to Venus'
NASA Technical Reports Server (NTRS)
1988-01-01
Visual aid titled 'The Magellan Mission to Venus' describes data that will be collected and science objectives. Images and brightness temperatures will be obtained for 70-90% of the surface, with a radar resolution of 360 meters or better. The global gravity field model will be refined by combining Magellan and Pioneer-Venus doppler data. Altimetry data will be used to measure the topography of 70-90% of the surface with a vertical accuracy of 120-360 meters. Science objectives include: to improve the knowledge of the geological history of Venus by analysis of the surface morphology and electrical properties and the processes that control them; and to improve the knowledge of the geophysics of Venus, principally its density distribution and dynamics. Magellan, named for the 16th century Portuguese explorer, will be deployed from the payload bay (PLB) of Atlantis, Orbiter Vehicle (OV) 104, during mission STS-30.
Objects and categories: feature statistics and object processing in the ventral stream.
Tyler, Lorraine K; Chiu, Shannon; Zhuang, Jie; Randall, Billi; Devereux, Barry J; Wright, Paul; Clarke, Alex; Taylor, Kirsten I
2013-10-01
Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions-the fusiform gyrus and aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.
A designated odor-language integration system in the human brain.
Olofsson, Jonas K; Hurley, Robert S; Bowman, Nicholas E; Bao, Xiaojun; Mesulam, M-Marsel; Gottfried, Jay A
2014-11-05
Odors are surprisingly difficult to name, but the mechanism underlying this phenomenon is poorly understood. In experiments using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the physiological basis of odor naming with a paradigm where olfactory and visual object cues were followed by target words that either matched or mismatched the cue. We hypothesized that word processing would not only be affected by its semantic congruency with the preceding cue, but would also depend on the cue modality (olfactory or visual). Performance was slower and less precise when linking a word to its corresponding odor than to its picture. The ERP index of semantic incongruity (N400), reflected in the comparison of nonmatching versus matching target words, was more constrained to posterior electrode sites and lasted longer on odor-cue (vs picture-cue) trials. In parallel, fMRI cross-adaptation in the right orbitofrontal cortex (OFC) and the left anterior temporal lobe (ATL) was observed in response to words when preceded by matching olfactory cues, but not by matching visual cues. Time-series plots demonstrated increased fMRI activity in OFC and ATL at the onset of the odor cue itself, followed by response habituation after processing of a matching (vs nonmatching) target word, suggesting that predictive perceptual representations in these regions are already established before delivery and deliberation of the target word. Together, our findings underscore the modality-specific anatomy and physiology of object identification in the human brain. Copyright © 2014 the authors 0270-6474/14/3414864-10$15.00/0.
Pires, Carla; Vigário, Marina; Cavaco, Afonso
2015-08-01
Among other regulatory requirements, medicine brands should be composed of single names without abbreviations to prevent errors in prescription of medication. The purposes of the study were to investigate the compliance of a sam ple of Portuguese medicine brand names with Portuguese pharmaceutical regulations. This includes identifying their basic linguistic characteristics and comparing these features and their frequency of occurrence with benchmark values of the colloquial or informal language. A sample of 474 brand names was selected. Names were analyzed using manual (visual analyses) and computer methods (FreP - Frequency Patterns of Phonological Objects in Portuguese and MS word). A significant number of names (61.3%) failed to comply with the Portuguese phonologic system (related to the sound of words) and/or the spelling system (related to the written form of words) contained more than one word, comprised a high proportion of infrequent syllable types or stress patterns and included abbreviations. The results suggest that some of the brand names of Portuguese medication should be reevaluated, and that regulation on this issue should be enforced and updated, taking into consideration specific linguistic and spelling codes.
Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie
2017-02-19
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participant's first saccade during search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
The Organization of Shape and Color in Vision and Art
Pinna, Baingio
2011-01-01
The aim of this work is to study the phenomenal organization of shape and color in vision and art in terms of microgenesis of the object perception and creation. The idea of “microgenesis” is that the object perception and creation takes time to develop. Our hypothesis is that the roles of shape and color are extracted in sequential order and in the same order these roles are also used by artists to paint objects. Boundary contours are coded before color contours. The microgenesis of the object formation was demonstrated (i) by introducing new conditions derived from the watercolor illusion, where the juxtaposed contours are displaced horizontally or vertically, and based on variations of Matisse’s Woman, (ii) by studying descriptions and replications of visual objects in adults and children of different ages, and (iii) by analyzing the linguistic sequence and organization in a free naming task of the attributes related to shape and color. The results supported the idea of the microgenesis of the object perception, namely the temporal order in the formation of the roles of the object properties (shape before color). Some general principles were extracted from the experimental results. They can be a starting point to explore a new domain focused on the microgenesis of shape and color within the more general problem of object organization, where integrated and multidisciplinary studies based on art and vision science can be very useful. PMID:22065954
Guasom Analysis Of The Alhambra Survey
NASA Astrophysics Data System (ADS)
Garabato, Daniel; Manteiga, Minia; Dafonte, Carlos; Álvarez, Marco A.
2017-10-01
GUASOM is a data mining tool designed for knowledge discovery in large astronomical spectrophotometric archives developed in the framework of Gaia DPAC (Data Processing and Analysis Consortium). Our tool is based on a type of unsupervised learning Artificial Neural Networks named Self-organizing maps (SOMs). SOMs permit the grouping and visualization of big amount of data for which there is no a priori knowledge and hence they are very useful for analyzing the huge amount of information present in modern spectrophotometric surveys. SOMs are used to organize the information in clusters of objects, as homogeneously as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Each cluster has a representative, called prototype which is a virtual pattern that better represents or resembles the set of input patterns belonging to such a cluster. Prototypes make easier the task of determining the physical nature and properties of the objects populating each cluster. Our algorithm has been tested on the ALHAMBRA survey spectrophotometric observations, here we present our results concerning the survey segmentation, visualization of the data structure, separation between types of objects (stars and galaxies), data homogeneity of neurons, cluster prototypes, redshift distribution and crossmatch with other databases (Simbad).
Tracking the impact of depression in a perspective-taking task.
Ferguson, Heather J; Cane, James
2017-11-01
Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.
Role of Visual Speech in Phonological Processing by Children With Hearing Loss
Jerger, Susan; Tye-Murray, Nancy; Abdi, Hervé
2011-01-01
Purpose This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in place of articulation, or conflicting in voicing—for example, the picture “pizza” coupled with the distractors “peach,” “teacher,” or “beast,” respectively. Speed of picture naming was measured. Results The conflicting conditions slowed naming, and phonological processing by children with HL displayed the age-related shift in sensitivity to visual speech seen in children with NH, although with developmental delay. Younger children with HL exhibited a disproportionately large influence of visual speech and a negligible influence of auditory speech, whereas older children with HL showed a robust influence of auditory speech with no benefit to performance from adding visual speech. The congruent conditions did not speed naming in children with HL, nor did the addition of visual speech influence performance. Unexpectedly, the /∧/-vowel congruent distractors slowed naming in children with HL and decreased articulatory proficiency. Conclusions Results for the conflicting conditions are consistent with the hypothesis that speech representations in children with HL (a) are initially disproportionally structured in terms of visual speech and (b) become better specified with age in terms of auditorily encoded information. PMID:19339701
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Errors rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615
How does aging affect the types of error made in a visual short-term memory 'object-recall' task?
Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina
2014-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Errors rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
Brand name confusion: Subjective and objective measures of orthographic similarity.
Burt, Jennifer S; McFarlane, Kimberley A; Kelly, Sarah J; Humphreys, Michael S; Weatherall, Kimberlee; Burrell, Robert G
2017-09-01
Determining brand name similarity is vital in areas of trademark registration and brand confusion. Students rated the orthographic (spelling) similarity of word pairs (Experiments 1, 2, and 4) and brand name pairs (Experiment 5). Similarity ratings were consistently higher when words shared beginnings rather than endings, whereas shared pronunciation of the stressed vowel had small and less consistent effects on ratings. In Experiment 3 a behavioral task confirmed the similarity of shared beginnings in lexical processing. Specifically, in a task requiring participants to decide whether 2 words presented in the clear (a probe and a later target) were the same or different, a masked prime word preceding the target shortened response latencies if it shared its initial 3 letters with the target. The ratings of students for word and brand name pairs were strongly predicted by metrics of orthographic similarity from the visual word identification literature based on the number of shared letters and their relative positions. The results indicate a potential use for orthographic metrics in brand name registration and trademark law. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Doing molecular biophysics: finding, naming, and picturing signal within complexity.
Richardson, Jane S; Richardson, David C
2013-01-01
A macromolecular structure, as measured data or as a list of coordinates or even on-screen as a full atomic model, is an extremely complex and confusing object. The underlying rules of how it folds, moves, and interacts as a biological entity are even less evident or intuitive to the human mind. To do science on such molecules, or to relate them usefully to higher levels of biology, we need to start with a natural history that names their features in meaningful ways and with multiple representations (visual or algebraic) that show some aspect of their organizing principles. The two of us have jointly enjoyed a highly varied and engrossing career in biophysical research over nearly 50 years. Our frequent changes of emphasis are tied together by two threads: first, by finding the right names, visualizations, and methods to help both ourselves and others to better understand the 3D structures of protein and RNA molecules, and second, by redefining the boundary between signal and noise for complex data, in both directions-sometimes identifying and promoting real signal up out of what seemed just noise, and sometimes demoting apparent signal into noise or systematic error. Here we relate parts of our scientific and personal lives, including ups and downs, influences, anecdotes, and guiding principles such as the title theme.
ERIC Educational Resources Information Center
Antzaka, Alexia; Martin, Clara; Caffarra, Sendy; Schlöffel, Sophie; Carreiras, Manuel; Lallier, Marie
2018-01-01
The present study investigated whether orthographic depth can increase the bias towards multi-letter processing in two reading-related skills: visual attention span (VAS) and rapid automatized naming (RAN). VAS (i.e., the number of visual elements that can be processed at once in a multi-element array) was tested with a visual 1-back task and RAN…
Kishk, Hanem; Elwan, Mohamed M.; Abouelkheir, Hossam Youssef
2018-01-01
Objectives To study the fitting and the visual rehabilitation obtained with a corneoscleral contact lens, namely, Rose K2 XL in patients with irregular cornea. Methods This prospective study included 36 eyes of 36 patients with irregular cornea fitted with Rose K2 XL. Refractive and visual outcomes and mesopic and aberrometric parameters of fitted eyes were assessed at 2 weeks, 3 months, and 6 months after the initial lens use. Objective and subjective parameters of patient satisfaction and lens comfort were noted. Causes of lens discontinuation and complications were also recorded. Results Average logMAR VA improved significantly from 0.95 ± 0.09 without correction to 0.04 ± 0.05 six months after lens wear. Similarly, mesopic and aberrometric measures were significantly improved. Statistical analysis of the subjective patients' responses showed a significant acceptance of the lens by most of them. At the end of follow-up, the mean wearing time was 9.9 ± 2.9 hours per day. The most common cause of wearing discontinuation was persistent discomfort (16.7%) and high lens expenses(16.7%). Self-assessed questionnaire showed statistically significant improvement in nearly all measured subjective parameters. Conclusion Rose K2 XL lenses provide patients with irregular cornea with both quantitative and qualitative optimal visual function with high degree of patient comfort and satisfaction. PMID:29484205
Bruffaerts, Rose; De Weer, An-Sofie; De Grauwe, Sophie; Thys, Miek; Dries, Eva; Thijs, Vincent; Sunaert, Stefan; Vandenbulcke, Mathieu; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik
2014-09-01
We investigated the critical contribution of right ventral occipitotemporal cortex to knowledge of visual and functional-associative attributes of biological and non-biological entities and how this relates to category-specificity during confrontation naming. In a consecutive series of 7 patients with lesions confined to right ventral occipitotemporal cortex, we conducted an extensive assessment of oral generation of visual-sensory and functional-associative features in response to the names of biological and nonbiological entities. Subjects also performed a confrontation naming task for these categories. Our main novel finding related to a unique case with a small lesion confined to right medial fusiform gyrus who showed disproportionate naming impairment for nonbiological versus biological entities, specifically for tools. Generation of visual and functional-associative features was preserved for biological and non-biological entities. In two other cases, who had a relatively small posterior lesion restricted to primary visual and posterior fusiform cortex, retrieval of visual attributes was disproportionately impaired compared to functional-associative attributes, in particular for biological entities. However, these cases did not show a category-specific naming deficit. Two final cases with the largest lesions showed a classical dissociation between biological versus nonbiological entities during naming, with normal feature generation performance. This is the first lesion-based evidence of a critical contribution of the right medial fusiform cortex to tool naming. Second, dissociations along the dimension of attribute type during feature generation do not co-occur with category-specificity during naming in the current patient sample. Copyright © 2014 Elsevier Ltd. All rights reserved.
Development of Embodied Word Meanings: Sensorimotor Effects in Children's Lexical Processing.
Inkster, Michelle; Wellsby, Michele; Lloyd, Ellen; Pexman, Penny M
2016-01-01
Previous research showed an effect of words' rated body-object interaction (BOI) in children's visual word naming performance, but only in children 8 years of age or older (Wellsby and Pexman, 2014a). In that study, however, BOI was established using adult ratings. Here we collected ratings from a group of parents for children's BOI experience (child-BOI). We examined effects of words' child-BOI and also words' imageability on children's responses in an auditory word naming task, which is suited to the lexical processing skills of younger children. We tested a group of 54 children aged 6-7 years and a comparison group of 25 adults. Results showed significant effects of both imageability and child-BOI on children's auditory naming latencies. These results provide evidence that children younger than 8 years of age have richer semantic representations for high imageability and high child-BOI words, consistent with an embodied account of word meaning.
Difference, Visual Narration, and "Point of View" in "My Name is Red"
ERIC Educational Resources Information Center
Cicekoglu, Feride
2003-01-01
This paper focuses on the difference between Eastern and Western ways of visual narration, taking as its frame of reference the novel "My Name is Red," by Turkish author Orhan Pamuk, winner of the 2003 International IMPAC Dublin Literary Award. This book is particularly important in terms of visual narration because it highlights the…
Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi
2016-01-01
Visual odometry (VO) estimation from blurred image is a challenging problem in practical robot applications, and the blurred images will severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images, and present an adaptive visual odometry estimation framework robust to blurred images. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of the image, then an adaptive blurred image classification algorithm is proposed to recognize the blurred images, finally we propose an anti-blurred key-frame selection algorithm to enable the VO robust to blurred images. We also carried out varied comparable experiments to evaluate the performance of the VO algorithms with our anti-blur framework under varied blurred images, and the experimental results show that our approach can achieve superior performance comparing to the state-of-the-art methods under the condition with blurred images while not increasing too much computation cost to the original VO algorithms. PMID:27399704
Vision Problems and Reduced Reading Outcomes in Queensland Schoolchildren.
Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M
2017-03-01
To assess the relationship between vision and reading outcomes in Indigenous and non-Indigenous schoolchildren to determine whether vision problems are associated with lower reading outcomes in these populations. Vision testing and reading assessments were performed on 508 Indigenous and non-Indigenous schoolchildren in Queensland, Australia divided into two age groups: Grades 1 and 2 (6-7 years of age) and Grades 6 and 7 (12-13 years of age). Vision parameters measured included cycloplegic refraction, near point of convergence, heterophoria, fusional vergence range, rapid automatized naming, and visual motor integration. The following vision conditions were then classified based on the vision findings: uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration. Reading accuracy and reading comprehension were measured with the Neale reading test. The effect of uncorrected hyperopia, convergence insufficiency, reduced rapid automatized naming, and delayed visual motor integration on reading accuracy and reading comprehension were investigated with ANCOVAs. The ANCOVAs explained a significant proportion of variance in both reading accuracy and reading comprehension scores in both age groups, with 40% of the variation in reading accuracy and 33% of the variation in reading comprehension explained in the younger age group, and 27% and 10% of the variation in reading accuracy and reading comprehension, respectively, in the older age group. The vision parameters of visual motor integration and rapid automatized naming were significant predictors in all ANCOVAs (P < .01). The direction of the relationship was such that reduced reading results were explained by reduced visual motor integration and rapid automatized naming results. Both reduced rapid automatized naming and visual motor integration were associated with poorer reading outcomes in Indigenous and non-Indigenous children. This is an important finding given the recent emphasis placed on Indigenous children's reading skills and the fact that reduced rapid automatized naming and visual motor integration skills are more common in this group.
The role of color information on object recognition: a review and meta-analysis.
Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís
2011-09-01
In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition. Copyright © 2011 Elsevier B.V. All rights reserved.
Rupp, Kyle; Roos, Matthew; Milsap, Griffin; Caceres, Carlos; Ratto, Christopher; Chevillet, Mark; Crone, Nathan E; Wolmetz, Michael
2017-03-01
Non-invasive neuroimaging studies have shown that semantic category and attribute information are encoded in neural population activity. Electrocorticography (ECoG) offers several advantages over non-invasive approaches, but the degree to which semantic attribute information is encoded in ECoG responses is not known. We recorded ECoG while patients named objects from 12 semantic categories and then trained high-dimensional encoding models to map semantic attributes to spectral-temporal features of the task-related neural responses. Using these semantic attribute encoding models, untrained objects were decoded with accuracies comparable to whole-brain functional Magnetic Resonance Imaging (fMRI), and we observed that high-gamma activity (70-110Hz) at basal occipitotemporal electrodes was associated with specific semantic dimensions (manmade-animate, canonically large-small, and places-tools). Individual patient results were in close agreement with reports from other imaging modalities on the time course and functional organization of semantic processing along the ventral visual pathway during object recognition. The semantic attribute encoding model approach is critical for decoding objects absent from a training set, as well as for studying complex semantic encodings without artificially restricting stimuli to a small number of semantic categories. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Neo: an object model for handling electrophysiology data in multiple formats
Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.
2014-01-01
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386
Neo: an object model for handling electrophysiology data in multiple formats.
Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P
2014-01-01
Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
Evolution of mammographic image quality in the state of Rio de Janeiro*
Villar, Vanessa Cristina Felippe Lopes; Seta, Marismary Horsth De; de Andrade, Carla Lourenço Tavares; Delamarque, Elizabete Vianna; de Azevedo, Ana Cecília Pedrosa
2015-01-01
Objective To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections in the period from 2006 to 2011. Materials and Methods Descriptive study analyzing parameters connected with imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results Amongst the 16 analyzed parameters, 7 presented more than 70% of conformity, namely: compression paddle pressure intensity (85.1%), films development (72.7%), film response (72.7%), low contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%), mammography-specific developers availability (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and high contrast details with respect to microcalcifications visualization (47.1%). Conclusion The analysis revealed critical situations in terms of compliance with the health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval. PMID:25987749
Blindness to background: an inbuilt bias for visual objects.
O'Hanlon, Catherine G; Read, Jenny C A
2017-09-01
Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.
Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory
2017-05-01
The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Warmington, Meesha; Hulme, Charles
2012-01-01
This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…
You Can Touch This! Bringing HST images to life as 3-D models
NASA Astrophysics Data System (ADS)
Christian, Carol A.; Nota, A.; Grice, N. A.; Sabbi, E.; Shaheen, N.; Greenfield, P.; Hurst, A.; Kane, S.; Rao, R.; Dutterer, J.; de Mink, S. E.
2014-01-01
We present the very first results of an innovative process to transform Hubble images into tactile 3-D models of astronomical objects. We have created a very new, unique tool for understanding astronomical phenomena, especially designed to make astronomy accessible to visually impaired children and adults. From the multicolor images of stellar clusters, we construct 3-D computer models that are digitally sliced into layers, each featuring touchable patterning and Braille characters, and are printed on a 3-D printer. The slices are then fitted together, so that the user can explore the structure of the cluster environment with their fingertips, slice-by-slice, analogous to a visual fly-through. Students will be able to identify and spatially locate the different components of these complex astronomical objects, namely gas, dust and stars, and will learn about the formation and composition of stellar clusters. The primary audiences for the 3D models are middle school and high school blind students and, secondarily, blind adults. However, we believe that the final materials will address a broad range of individuals with varied and multi-sensory learning styles, and will be interesting and visually appealing to the public at large.
An odor identification approach based on event-related pupil dilation and gaze focus.
Aguillon-Hernandez, Nadia; Naudin, Marine; Roché, Laëtitia; Bonnet-Brilhault, Frédérique; Belzung, Catherine; Martineau, Joëlle; Atanasova, Boriana
2015-06-01
Olfactory disorders constitute a potential marker of many diseases and are considered valuable clues to the diagnosis and evaluation of progression for many disorders. The most commonly used test for the evaluation of impairments of olfactory identification requires the active participation of the subject, who must select the correct name of the perceived odor from a list. An alternative method is required because speech may be impaired or not yet learned in many patients. As odor identification is known to be facilitated by searching for visual clues, we aimed to develop an objective, vision-based approach for the evaluation of odor identification. We used an eye tracking method to quantify pupillary and ocular responses during the simultaneous presentation of olfactory and visual stimuli, in 39 healthy participants aged from 19 to 77years. Odor presentation triggered an increase in pupil dilation and gaze focus on the picture corresponding to the odor presented. These results suggest that odorant stimuli increase recruitment of the sympathetic system (as demonstrated by the reactivity of the pupil) and draw attention to the visual clue. These results validate the objectivity of this method. Copyright © 2015 Elsevier B.V. All rights reserved.
Valente, Andrea; Bürki, Audrey; Laganaro, Marina
2014-01-01
A major effort in cognitive neuroscience of language is to define the temporal and spatial characteristics of the core cognitive processes involved in word production. One approach consists in studying the effects of linguistic and pre-linguistic variables in picture naming tasks. So far, studies have analyzed event-related potentials (ERPs) during word production by examining one or two variables with factorial designs. Here we extended this approach by investigating simultaneously the effects of multiple theoretical relevant predictors in a picture naming task. High density EEG was recorded on 31 participants during overt naming of 100 pictures. ERPs were extracted on a trial by trial basis from picture onset to 100 ms before the onset of articulation. Mixed-effects regression models were conducted to examine which variables affected production latencies and the duration of periods of stable electrophysiological patterns (topographic maps). Results revealed an effect of a pre-linguistic variable, visual complexity, on an early period of stable electric field at scalp, from 140 to 180 ms after picture presentation, a result consistent with the proposal that this time period is associated with visual object recognition processes. Three other variables, word Age of Acquisition, Name Agreement, and Image Agreement influenced response latencies and modulated ERPs from ~380 ms to the end of the analyzed period. These results demonstrate that a topographic analysis fitted into the single trial ERPs and covering the entire processing period allows one to associate the cost generated by psycholinguistic variables to the duration of specific stable electrophysiological processes and to pinpoint the precise time-course of multiple word production predictors at once.
Visual object tracking by correlation filters and online learning
NASA Astrophysics Data System (ADS)
Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei
2018-06-01
Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve high accuracy and fast speed for object tracking. Currently, correlation filters based trackers (CFTs) show promising performance in object tracking. The CFTs estimate the target's position by correlation filters with different kinds of features. However, most of CFTs can hardly re-detect the target in the case of long-term tracking drifts. In this paper, a feature integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multi-features. To reduce tracking drifts, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to the state-of-the-art approaches.
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintaining upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravity acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Another Function for Language and its Theoretical Consequences
NASA Astrophysics Data System (ADS)
Barahona da Fonseca, Isabel; Barahona da Fonseca, José; Simões da Fonseca, José
2006-06-01
Our proposal is that when they exercise the faculty of "parole" subjects use strategies characterized by an internal reconstruction of objects which acquire a status similar to the imperative believe in the representation of reality as it occurs in visual or auditory perception. The referent of verbal expressions acquires a greater importance for the subject who uses it according more to rhetoric principles than through logical critical analysis. Consequences concerning psychopathology, namely the phenomena of hallucination are explained on that basis.
NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data
Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.
2005-01-01
NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C + + Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C + +, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
Marful, Alejandra; Paolieri, Daniela; Bajo, M Teresa
2014-04-01
A current debate regarding face and object naming concerns whether they are equally vulnerable to semantic interference. Although some studies have shown similar patterns of interference, others have revealed different effects for faces and objects. In Experiment 1, we compared face naming to object naming when exemplars were presented in a semantically homogeneous context (grouped by their category) or in a semantically heterogeneous context (mixed) across four cycles. The data revealed significant slowing for both face and object naming in the homogeneous context. This semantic interference was explained as being due to lexical competition from the conceptual activation of category members. When focusing on the first cycle, a facilitation effect for objects but not for faces appeared. This result permits us to explain the previously observed discrepancies between face and object naming. Experiment 2 was identical to Experiment 1, with the exception that half of the stimuli were presented as face/object names for reading. Semantic interference was present for both face and object naming, suggesting that faces and objects behave similarly during naming. Interestingly, during reading, semantic interference was observed for face names but not for object names. This pattern is consistent with previous assumptions proposing the activation of a person identity during face name reading.
Krishnan, Saloni; Leech, Robert; Mercure, Evelyne; Lloyd-Fox, Sarah; Dick, Frederic
2015-01-01
In adults, patterns of neural activation associated with perhaps the most basic language skill—overt object naming—are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity. PMID:24907249
Collette, Cynthia; Bonnotte, Isabelle; Jacquemont, Charlotte; Kalénine, Solène; Bartolo, Angela
2016-01-01
Object semantics include object function and manipulation knowledge. Function knowledge refers to the goal attainable by using an object (e.g., the function of a key is to open or close a door) while manipulation knowledge refers to gestures one has to execute to use an object appropriately (e.g., a key is held between the thumb and the index, inserted into the door lock and then turned). To date, several studies have assessed function and manipulation knowledge in brain lesion patients as well as in healthy adult populations. In patients with left brain damage, a double dissociation between these two types of knowledge has been reported; on the other hand, behavioral studies in healthy adults show that function knowledge is processed faster than manipulation knowledge. Empirical evidence has shown that object interaction in children differs from that in adults, suggesting that the access to function and manipulation knowledge in children might also differ. To investigate the development of object function and manipulation knowledge, 51 typically developing 8-9-10 year-old children and 17 healthy young adults were tested on a naming task associated with a semantic priming paradigm (190-ms SOA; prime duration: 90 ms) in which a series of line drawings of manipulable objects were used. Target objects could be preceded by three priming contexts: related (e.g., knife-scissors for function; key-screwdriver for manipulation), unrelated but visually similar (e.g., glasses-scissors; baseball bat-screwdriver), and purely unrelated (e.g., die-scissors; tissue-screwdriver). Results showed a different developmental pattern of function and manipulation priming effects. Function priming effects were not present in children and emerged only in adults, with faster naming responses for targets preceded by objects sharing the same function. In contrast, manipulation priming effects were already present in 8-year-olds with faster naming responses for targets preceded by objects sharing the same manipulation and these decreased linearly between 8 and 10 years of age, 10-year-olds not differing from adults. Overall, results show that the access to object function and manipulation knowledge changes during development by favoring manipulation knowledge in childhood and function knowledge in adulthood. PMID:27602004
A selective deficit in imageable concepts: a window to the organization of the conceptual system.
Gvion, Aviah; Friedmann, Naama
2013-01-01
Nissim, a 64 years old Hebrew-speaking man who sustained an ischemic infarct in the left occipital lobe, exhibited an intriguing pattern. He could hold a deep and fluent conversation about abstract and complex issues, such as the social risks in unemployment, but failed to retrieve imageable words such as ball, spoon, carrot, or giraffe. A detailed study of the words he could and could not retrieve, in tasks of picture naming, tactile naming, and naming to definition, indicated that whereas he was able to retrieve abstract words, he had severe difficulties when trying to retrieve imageable words. The same dissociation also applied for proper names-he could retrieve names of people who have no visual image attached to their representation (such as the son of the biblical Abraham), but could not name people who had a visual image (such as his own son, or Barack Obama). When he tried to produce imageable words, he mainly produced perseverations and empty speech, and some semantic paraphasias. He did not produce perseverations when he tried to retrieve abstract words. This suggests that perseverations may occur when the phonological production system produces a word without proper activation in the semantic lexicon. Nissim evinced a similar dissociation in comprehension-he could understand abstract words and sentences but failed to understand sentences with imageable words, and to match spoken imageable words to pictures or to semantically related imageable words. He was able to understand proverbs with imageable literal meaning but abstract figurative meaning. His comprehension was impaired also in tasks of semantic associations of pictures, pointing to a conceptual, rather than lexical source of the deficit. His visual perception as well as his phonological input and output lexicons and buffers (assessed by auditory lexical decision, word and sentence repetition, and writing to dictation) were intact, supporting a selective conceptual system impairment. He was able to retrieve gestures for objects and pictures he saw, indicating that his access to concepts often sufficed for the activation of the motoric information but did not suffice for access to the entry in the semantic lexicon. These results show that imageable concepts can be selectively impaired, and shed light on the organization of conceptual-semantic system.
Awake surgery between art and science. Part II: language and cognitive mapping
Talacchi, Andrea; Santini, Barbara; Casartelli, Marilena; Monti, Alessia; Capasso, Rita; Miceli, Gabriele
Summary Direct cortical and subcortical stimulation has been claimed to be the gold standard for exploring brain function. In this field, efforts are now being made to move from intraoperative naming-assisted surgical resection towards the use of other language and cognitive tasks. However, before relying on new protocols and new techniques, we need a multi-staged system of evidence (low and high) relating to each step of functional mapping and its clinical validity. In this article we examine the possibilities and limits of brain mapping with the aid of a visual object naming task and various other tasks used to date. The methodological aspects of intraoperative brain mapping, as well as the clinical and operative settings, were discussed in Part I of this review. PMID:24139658
Detecting objects in radiographs for homeland security
NASA Astrophysics Data System (ADS)
Prasad, Lakshman; Snyder, Hans
2005-05-01
We present a general scheme for segmenting a radiographic image into polygons that correspond to visual features. This decomposition provides a vectorized representation that is a high-level description of the image. The polygons correspond to objects or object parts present in the image. This characterization of radiographs allows the direct application of several shape recognition algorithms to identify objects. In this paper we describe the use of constrained Delaunay triangulations as a uniform foundational tool to achieve multiple visual tasks, namely image segmentation, shape decomposition, and parts-based shape matching. Shape decomposition yields parts that serve as tokens representing local shape characteristics. Parts-based shape matching enables the recognition of objects in the presence of occlusions, which commonly occur in radiographs. The polygonal representation of image features affords the efficient design and application of sophisticated geometric filtering methods to detect large-scale structural properties of objects in images. Finally, the representation of radiographs via polygons results in significant reduction of image file sizes and permits the scalable graphical representation of images, along with annotations of detected objects, in the SVG (scalable vector graphics) format that is proposed by the world wide web consortium (W3C). This is a textual representation that can be compressed and encrypted for efficient and secure transmission of information over wireless channels and on the Internet. In particular, our methods described here provide an algorithmic framework for developing image analysis tools for screening cargo at ports of entry for homeland security.
Metric invariance in object recognition: a review and further evidence.
Cooper, E E; Biederman, I; Hummel, J E
1992-06-01
Phenomenologically, human shape recognition appears to be invariant with changes of orientation in depth (up to parts occlusion), position in the visual field, and size. Recent versions of template theories (e.g., Ullman, 1989; Lowe, 1987) assume that these invariances are achieved through the application of transformations such as rotation, translation, and scaling of the image so that it can be matched metrically to a stored template. Presumably, such transformations would require time for their execution. We describe recent priming experiments in which the effects of a prior brief presentation of an image on its subsequent recognition are assessed. The results of these experiments indicate that the invariance is complete: The magnitude of visual priming (as distinct from name or basic level concept priming) is not affected by a change in position, size, orientation in depth, or the particular lines and vertices present in the image, as long as representations of the same components can be activated. An implemented seven layer neural network model (Hummel & Biederman, 1992) that captures these fundamental properties of human object recognition is described. Given a line drawing of an object, the model activates a viewpoint-invariant structural description of the object, specifying its parts and their interrelations. Visual priming is interpreted as a change in the connection weights for the activation of: a) cells, termed geon feature assemblies (GFAs), that conjoin the output of units that represent invariant, independent properties of a single geon and its relations (such as its type, aspect ratio, relations to other geons), or b) a change in the connection weights by which several GFAs activate a cell representing an object.
Moreno-Martínez, F. Javier; Rodríguez-Rojo, Inmaculada C.
2015-01-01
The role of colour in object recognition is controversial; in this study, a critical review of previous studies, as well as a longitudinal study, was conducted. We examined whether colour benefits the ability of Alzheimer's disease (AD) patients and normal controls (NC) when naming items differing in colour diagnosticity: living things (LT) versus nonliving things (NLT). Eleven AD patients were evaluated twice with a temporal interval of 3 years; 26 NC were tested once. The participants performed a naming task (colour and greyscale photographs); the impact of nuisance variables (NVs) and potential ceiling effects were also controlled. Our results showed that (i) colour slightly favoured processing of items with higher colour diagnosticity (i.e., LT) in both groups; (ii) AD patients used colour information similarly to NC, retaining this ability over time; (iii) NVs played a significant role as naming predictors in all the participants, relegating domain to a minor plane; and (iv) category effects (better processing of NLT) were present in both groups. Finally, although patients underwent semantic longitudinal impairment, this was independent of colour deterioration. This finding provides better support to the view that colour is effective at the visual rather than at the semantic level of object processing. PMID:26074675
Moreno-Martínez, F Javier; Rodríguez-Rojo, Inmaculada C
2015-01-01
The role of colour in object recognition is controversial; in this study, a critical review of previous studies, as well as a longitudinal study, was conducted. We examined whether colour benefits the ability of Alzheimer's disease (AD) patients and normal controls (NC) when naming items differing in colour diagnosticity: living things (LT) versus nonliving things (NLT). Eleven AD patients were evaluated twice with a temporal interval of 3 years; 26 NC were tested once. The participants performed a naming task (colour and greyscale photographs); the impact of nuisance variables (NVs) and potential ceiling effects were also controlled. Our results showed that (i) colour slightly favoured processing of items with higher colour diagnosticity (i.e., LT) in both groups; (ii) AD patients used colour information similarly to NC, retaining this ability over time; (iii) NVs played a significant role as naming predictors in all the participants, relegating domain to a minor plane; and (iv) category effects (better processing of NLT) were present in both groups. Finally, although patients underwent semantic longitudinal impairment, this was independent of colour deterioration. This finding provides better support to the view that colour is effective at the visual rather than at the semantic level of object processing.
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Thoma, Volker; Henson, Richard N.
2011-01-01
The effects of attention and object configuration on the neural responses to short-lag visual image repetition were investigated with fMRI. Attention to one of two object images in a prime display was cued spatially. The images were either intact or split vertically; a manipulation that negates the influence of view-based representations. A subsequent single intact probe image was named covertly. Behavioural priming observed as faster button presses was found for attended primes in both intact and split configurations, but only for uncued primes in the intact configuration. In a voxel-wise analysis, fMRI repetition suppression (RS) was observed in a left mid-fusiform region for attended primes, both intact and split, whilst a right intraparietal region showed repetition enhancement (RE) for intact primes, regardless of attention. In a factorial analysis across regions of interest (ROIs) defined from independent localiser contrasts, RS for attended objects in the ventral stream was significantly left-lateralised, whilst repetition effects in ventral and dorsal ROIs correlated with the amount of priming in specific conditions. These fMRI results extend hybrid theories of object recognition, implicating left ventral stream regions in analytic processing (requiring attention), consistent with prior hypotheses about hemispheric specialisation, and implicating dorsal stream regions in holistic processing (independent of attention). PMID:21554967
Object-oriented approach to fast display of electrophysiological data under MS-windows.
Marion-Poll, F
1995-12-01
Microcomputers provide neuroscientists an alternative to a host of laboratory equipment to record and analyze electrophysiological data. Object-oriented programming tools bring an essential link between custom needs for data acquisition and analysis with general software packages. In this paper, we outline the layout of basic objects that display and manipulate electrophysiological data files. Visual inspection of the recordings is a basic requirement of any data analysis software. We present an approach that allows flexible and fast display of large data sets. This approach involves constructing an intermediate representation of the data in order to lower the number of actual points displayed while preserving the aspect of the data. The second group of objects is related to the management of lists of data files. Typical experiments designed to test the biological activity of pharmacological products include scores of files. Data manipulation and analysis are facilitated by creating multi-document objects that include the names of all experiment files. Implementation steps of both objects are described for an MS-Windows hosted application.
Development of Embodied Word Meanings: Sensorimotor Effects in Children’s Lexical Processing
Inkster, Michelle; Wellsby, Michele; Lloyd, Ellen; Pexman, Penny M.
2016-01-01
Previous research showed an effect of words’ rated body–object interaction (BOI) in children’s visual word naming performance, but only in children 8 years of age or older (Wellsby and Pexman, 2014a). In that study, however, BOI was established using adult ratings. Here we collected ratings from a group of parents for children’s BOI experience (child-BOI). We examined effects of words’ child-BOI and also words’ imageability on children’s responses in an auditory word naming task, which is suited to the lexical processing skills of younger children. We tested a group of 54 children aged 6–7 years and a comparison group of 25 adults. Results showed significant effects of both imageability and child-BOI on children’s auditory naming latencies. These results provide evidence that children younger than 8 years of age have richer semantic representations for high imageability and high child-BOI words, consistent with an embodied account of word meaning. PMID:27014129
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros
1985-01-01
This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.
Mapping the meanings of novel visual symbols by youth with moderate or severe mental retardation.
Romski, M A; Sevcik, R A; Robinson, B F; Mervis, C B; Bertrand, J
1996-01-01
The word-learning ability of 12 school-age subjects with moderate or severe mental retardation was assessed. Subjects had little or no functional speech and used the System for Augmenting Language with visual-graphic symbols for communication. Their ability to fast map novel symbols revealed whether they possessed the novel name-nameless category (N3C) lexical operating principle. On first exposure, 7 subjects were able to map symbol meanings for novel objects. Follow-up assessments indicated that mappers retained comprehension of some of the novel words for up to delays of 15 days and generalized their knowledge to production. Ability to fast map reliably was related to symbol achievement status. Implications for understanding vocabulary acquisition by youth with mental retardation were discussed.
Object tracking based on harmony search: comparative study
NASA Astrophysics Data System (ADS)
Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei
2012-10-01
Visual tracking can be treated as an optimization problem. A new meta-heuristic optimal algorithm, Harmony Search (HS), was first applied to perform visual tracking by Fourie et al. As the authors point out, many subjects are still required in ongoing research. Our work is a continuation of Fourie's study, with four prominent improved variations of HS, namely Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS) adopted into the tracking system. Their performances are tested and analyzed on multiple challenging video sequences. Experimental results show that IHS is best, with DHS ranking second among the four improved trackers when the iteration number is small. However, the differences between all four reduced gradually, along with the increasing number of iterations.
Multiplicative processes in visual cognition
NASA Astrophysics Data System (ADS)
Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.
2014-03-01
The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve, elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
Reilly, Jamie; Harnish, Stacy; Garcia, Amanda; Hung, Jinyi; Rodriguez, Amy D.; Crosson, Bruce
2014-01-01
Embodied cognition offers an approach to word meaning firmly grounded in action and perception. A strong prediction of embodied cognition is that sensorimotor simulation is a necessary component of lexical-semantic representation. One semantic distinction where motor imagery is likely to play a key role involves the representation of manufactured artifacts. Many questions remain with respect to the scope of embodied cognition. One dominant unresolved issue is the extent to which motor enactment is necessary for representing and generating words with high motor salience. We investigated lesion correlates of manipulable relative to non-manipulable name generation (e.g., name a school supply; name a mountain range) in patients with nonfluent aphasia (N=14). Lesion volumes within motor (BA4) and premotor (BA6) cortices were not predictive of category discrepancies. Lesion symptom mapping linked impairment for manipulable objects to polymodal convergence zones and to projections of the left, primary visual cortex specialized for motion perception (MT/V5+). Lesions to motor and premotor cortex were not predictive of manipulability impairment. This lesion correlation is incompatible with an embodied perspective premised on necessity of motor cortex for the enactment and subsequent production of motor-related words. These findings instead support a graded or ‘soft’ approach to embodied cognition premised on an ancillary role of modality-specific cortical regions in enriching modality-neutral representations. We discuss a dynamic, hybrid approach to the neurobiology of semantic memory integrating both embodied and disembodied components. PMID:24839997
de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
Bonin, Patrick; Guillemard-Tsaparina, Diana; Méot, Alain
2013-09-01
We report object-naming and object recognition times collected from Russian native speakers for the colorized version of the Snodgrass and Vanderwart (Journal of Experimental Psychology: Human Learning and Memory 6:174-215, 1980) pictures (Rossion & Pourtois, Perception 33:217-236, 2004). New norms for image variability, body-object interaction [BOI], and subjective frequency collected in Russian, as well as new name agreement scores for the colorized pictures in French, are also reported. In both object-naming and object comprehension times, the name agreement, image agreement, and age-of-acquisition variables made significant independent contributions. Objective word frequency was reliable in object-naming latencies only. The variables of image variability, BOI, and subjective frequency were not significant in either object naming or object comprehension. Finally, imageability was reliable in both tasks. The new norms and object-naming and object recognition times are provided as supplemental materials.
Manasse, N J; Hux, K; Snell, J
2005-08-10
Recalling names in real-world contexts is often difficult for survivors of traumatic brain injury despite successful completion of face-name association training programmes. This small number study utilized a sequential treatment approach in which a traditional training programme preceded real-world training. The traditional training component was identical across programmes: one-on-one intervention using visual imagery and photographs to assist in mastery of face-name associations. The real-world training component compared the effectiveness of three cueing strategies--name restating, phonemic cueing and visual imagery--and was conducted by the actual to-be-named people. Results revealed improved name learning and use by the participants regardless of cueing strategy. After treatment targeting six names, four of five participants consistently used two or more names spontaneously and consistently knew three or more names in response to questioning. In addition to documenting the effectiveness of real-world treatment paradigms, the findings call into question the necessity for preliminary traditional intervention.
Cortical reinstatement and the confidence and accuracy of source memory.
Thakral, Preston P; Wang, Tracy H; Rugg, Michael D
2015-04-01
Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments- and hence the quality of the source-specifying information--was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity. Copyright © 2015 Elsevier Inc. All rights reserved.
Kobayashi, Maya Shiho; Haynes, Charles W; Macaruso, Paul; Hook, Pamela E; Kato, Junko
2005-06-01
This study examined the extent to which mora deletion (phonological analysis), nonword repetition (phonological memory), rapid automatized naming (RAN), and visual search abilities predict reading in Japanese kindergartners and first graders. Analogous abilities have been identified as important predictors of reading skills in alphabetic languages like English. In contrast to English, which is based on grapheme-phoneme relationships, the primary components of Japanese orthography are two syllabaries-hiragana and katakana (collectively termed "kana")-and a system of morphosyllabic symbols (kanji). Three RAN tasks (numbers, objects, syllabary symbols [hiragana]) were used with kindergartners, with an additional kanji RAN task included for first graders. Reading measures included accuracy and speed of passage reading for kindergartners and first graders, and reading comprehension for first graders. In kindergartners, hiragana RAN and number RAN were the only significant predictors of reading accuracy and speed. In first graders, kanji RAN and hiragana RAN predicted reading speed, whereas accuracy was predicted by mora deletion. Reading comprehension was predicted by kanji RAN, mora deletion, and nonword repetition. Although number RAN did not contribute unique variance to any reading measure, it correlated highly with kanji RAN. Implications of these findings for research and practice are discussed.
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.
2018-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A
2017-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se , or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140-170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210-220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning.
Conversation and convention: enduring influences on name choice for common objects.
Malt, Barbara C; Sloman, Steven A
2004-12-01
The name chosen for an object is influenced by both short-term history (e.g., speaker-addressee pacts) and long-term history (e.g., the language's naming pattern for the domain). But these influences must somehow be linked. We propose that names adopted through speaker-addressee collaboration have influences that carry beyond the original context. To test this hypothesis, we adapted the standard referential communication task. The first director of each matching session was a confederate who introduced one of two possible names for each object. The director role then rotated to naive participants. The participants later rated name preference for the introduced and alternative names for each object. They also rated object typicality or similarity to each named category. The name that was initially introduced influenced later name use and preference, even for participants who had not heard the name from the original director. Typicality and similarity showed lesser effects from the names originally introduced. Name associations built in one context appear to influence retrieval and use of names in other contexts, but they have reduced impact on nonlinguistic object knowledge. These results support the notion that stable conventions for object names within a linguistic community may arise from local interactions, and they demonstrate how different populations of speakers may come to have a shared understanding of objects' nonlinguistic properties but different naming patterns.
Qualitative Examination of Children's Naming Skills through Test Adaptations.
ERIC Educational Resources Information Center
Fried-Oken, Melanie
1987-01-01
The Double Administration Naming Technique assists clinicians in obtaining qualitative information about a client's visual confrontation naming skills through administration of a standard naming test; readministration of the same test; identification of single and double errors; cuing for double naming errors; and qualitative analysis of naming…
Border collie comprehends object names as verbal referents.
Pilley, John W; Reid, Alliston K
2011-02-01
Four experiments investigated the ability of a border collie (Chaser) to acquire receptive language skills. Experiment 1 demonstrated that Chaser learned and retained, over a 3-year period of intensive training, the proper-noun names of 1022 objects. Experiment 2 presented random pair-wise combinations of three commands and three names, and demonstrated that she understood the separate meanings of proper-noun names and commands. Chaser understood that names refer to objects, independent of the behavior directed toward those objects. Experiment 3 demonstrated Chaser's ability to learn three common nouns--words that represent categories. Chaser demonstrated one-to-many (common noun) and many-to-one (multiple-name) name-object mappings. Experiment 4 demonstrated Chaser's ability to learn words by inferential reasoning by exclusion--inferring the name of an object based on its novelty among familiar objects that already had names. Together, these studies indicate that Chaser acquired referential understanding of nouns, an ability normally attributed to children, which included: (a) awareness that words may refer to objects, (b) awareness of verbal cues that map words upon the object referent, and (c) awareness that names may refer to unique objects or categories of objects, independent of the behaviors directed toward those objects. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kobayashi, Maya Shiho; Haynes, Charles W.; Macaruso, Paul; Hook, Pamela E.; Kato, Junko
2005-01-01
This study examined the extent to which mora deletion (phonological analysis), nonword repetition (phonological memory), rapid automatized naming (RAN), and visual search abilities predict reading in Japanese kindergartners and first graders. Analogous abilities have been identified as important predictors of reading skills in alphabetic languages…
Parallel Processing of Objects in a Naming Task
ERIC Educational Resources Information Center
Meyer, Antje S.; Ouellet, Marc; Hacker, Christine
2008-01-01
The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown…
2015-06-01
Visualization, Graph Navigation, Visual Literacy 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT UU 18. NUMBER OF PAGES 19a. NAME OF...3 2.3. Visual Literacy ...obscured and individual edges that could be traversed before bundled are now completely lost among the bundled edges. 2.3. Visual Literacy Visual
Foreground-background segmentation and attention: a change blindness study.
Mazza, Veronica; Turatto, Massimo; Umiltà, Carlo
2005-01-01
One of the most debated questions in visual attention research is what factors affect the deployment of attention in the visual scene? Segmentation processes are influential factors, providing candidate objects for further attentional selection, and the relevant literature has concentrated on how figure-ground segmentation mechanisms influence visual attention. However, another crucial process, namely foreground-background segmentation, seems to have been neglected. By using a change blindness paradigm, we explored whether attention is preferentially allocated to the foreground elements or to the background ones. The results indicated that unless attention was voluntarily deployed to the background, large changes in the color of its elements remained unnoticed. In contrast, minor changes in the foreground elements were promptly reported. Differences in change blindness between the two regions of the display indicate that attention is, by default, biased toward the foreground elements. This also supports the phenomenal observations made by Gestaltists, who demonstrated the greater salience of the foreground than the background.
Medial perirhinal cortex disambiguates confusable objects
Tyler, Lorraine K.; Monsch, Andreas U.; Taylor, Kirsten I.
2012-01-01
Our brain disambiguates the objects in our cluttered visual world seemingly effortlessly, enabling us to understand their significance and to act appropriately. The role of anteromedial temporal structures in this process, particularly the perirhinal cortex, is highly controversial. In some accounts, the perirhinal cortex is necessary for differentiating between perceptually and semantically confusable objects. Other models claim that the perirhinal cortex neither disambiguates perceptually confusable objects nor plays a unique role in semantic processing. One major hurdle to resolving this central debate is the fact that brain damage in human patients typically encompasses large portions of the anteromedial temporal lobe, such that the identification of individual substructures and precise neuroanatomical locus of the functional impairments has been difficult. We tested these competing accounts in patients with Alzheimer’s disease with varying degrees of atrophy in anteromedial structures, including the perirhinal cortex. To assess the functional contribution of each anteromedial temporal region separately, we used a detailed region of interest approach. From each participant, we obtained magnetic resonance imaging scans and behavioural data from a picture naming task that contrasted naming performance with living and non-living things as a way of manipulating perceptual and semantic confusability; living things are more similar to one another than non-living things, which have more distinctive features. We manually traced neuroanatomical regions of interest on native-space cortical surface reconstructions to obtain mean thickness estimates for the lateral and medial perirhinal cortex and entorhinal cortex. Mean cortical thickness in each region of interest, and hippocampal volume, were submitted to regression analyses predicting naming performance. Importantly, atrophy of the medial perirhinal cortex, but not lateral perirhinal cortex, entorhinal cortex or hippocampus, significantly predicted naming performance on living relative to non-living things. These findings indicate that one specific anteromedial temporal lobe region—the medial perirhinal cortex—is necessary for the disambiguation of perceptually and semantically confusable objects. Taken together, these results support a hierarchical account of object processing, whereby the perirhinal cortex at the apex of the ventral object processing system is required to bind properties of not just perceptually, but also semantically confusable objects together, enabling their disambiguation from other similar objects and thus comprehension. Significantly, this model combining a hierarchical object processing architecture with a semantic feature statistic account explains why category-specific semantic impairments for living things are associated with anteromedial temporal lobe damage, and pinpoints the root of this syndrome to perirhinal cortex damage. PMID:23250887
Dissociating verbal and nonverbal audiovisual object processing.
Hocking, Julia; Price, Cathy J
2009-02-01
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
Category specific dysnomia after thalamic infarction: a case-control study.
Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli
2005-01-01
Category specific naming impairment was described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause general semantic naming deficit, and only rarely a category specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to that of five controls. In naming to visual stimuli the patient performed poorly (error rate>50%) in four categories: vegetables, toys, animals and body parts (average 70.31+/-15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28+/-9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category specific profile of response and naming errors to visual and tactile, but not auditory stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but further as a central integrator of different stages of perceptual and semantic processing.
Cammarota, M; Huppes, V; Gaia, S; Degoulet, P
1998-01-01
The development of Health Information Systems is widely determined by the establishment of the underlying information models. An Object-Oriented Matrix Model (OOMM) is described which target is to facilitate the integration of the overall health system. The model is based on information modules named micro-databases that are structured in a three-dimensional network: planning, health structures and information systems. The modelling tool has been developed as a layer on top of a relational database system. A visual browser facilitates the development and maintenance of the information model. The modelling approach has been applied to the Brasilia University Hospital since 1991. The extension of the modelling approach to the Brasilia regional health system is considered.
Impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease patients.
Simoes Loureiro, Isabelle; Lefebvre, Laurent
2015-01-01
The aim of this study was to generalize the positive impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease (AD) patients. In practice, the naming skills of healthy elderly persons improve when additional sensory signals are included. The hypothesis of this study was that the same influence would be observable in AD patients. Sixty elderly patients separated into three groups (healthy subjects, stage 1 AD patients, and stage 2 AD patients) were tested with a battery of naming tasks comprising three different modalities: a visual modality, an auditory modality, and a visual and auditory modality (bimodality). Our results reveal the positive influence of bimodality on the accuracy with which bimodal items are named (when compared with unimodal items) and their latency (when compared with unimodal auditory items). These results suggest that multisensory enrichment can improve lexical retrieval in AD patients.
Rapid Naming in Brazilian Students with Dyslexia and Attention Deficit Hyperactivity Disorder
Alves, Luciana Mendonça; Siqueira, Cláudia M.; Ferreira, Maria do Carmo Mangelli; Alves, Juliana Flores Mendonça; Lodi, Débora F.; Bicalho, Lorena; Celeste, Letícia C.
2016-01-01
Introduction: The effective development of reading and writing skills requires the concerted action of several abilities, one of which is phonological processing. One of the main components of phonological processing is rapid automatized naming (RAN)—the ability to identify and recognize a given item by the activation and concomitant articulation of its name. Objective: To assess the RAN performance of schoolchildren with dyslexia and attention deficit hyperactivity disorder (ADHD) compared with their peers. Methods: In total, 70 schoolchildren aged between 8 and 11 years participated in the study. Of these, 16 children had a multiprofessional diagnosis of ADHD while 14 were diagnosed with dyslexia. Matched with these groups, 40 schoolchildren with no history of developmental impairments were also evaluated. The RAN test was administered to assess the length of time required to name a series of familiar visual stimuli. The statistical analysis was conducted using measures of descriptive statistics and the 2-sample t-test at the 5% significance level. Results: The performance of the group with dyslexia was inferior to that of the control group in all tasks and the ADHD group had inferior performance for color and letters-naming tasks. The schoolchildren with dyslexia and those with ADHD showed very similar response times. Age was an important variable to be analyzed separately. As they aged, children with typical language development had fast answers on colors and digits tasks while children with dyslexia or ADHD did not show improvement with age. Conclusions: The schoolchildren with dyslexia took longer to complete all tasks and ADHD took longer to complete digits and objects tasks in comparison to their peers with typical development. This ability tended to improve with age, which was not the case, however, with schoolchildren who had ADHD or dyslexia. PMID:26858672
Wang, Xin; Deng, Zhongliang
2017-01-01
In order to recognize indoor scenarios, we extract image features for detecting objects, however, computers can make some unexpected mistakes. After visualizing the histogram of oriented gradient (HOG) features, we find that the world through the eyes of a computer is indeed different from human eyes, which assists researchers to see the reasons that cause a computer to make errors. Additionally, according to the visualization, we notice that the HOG features can obtain rich texture information. However, a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing the background interference. On the basis of the original HOG feature, we introduce a principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, which is named HOG–PCA (HOGP), is made by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualizing images of the HOGP feature are close to the observation results obtained by human eyes, which is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
Multiscale visual quality assessment for cluster analysis with self-organizing maps
NASA Astrophysics Data System (ADS)
Bernard, Jürgen; von Landesberger, Tatiana; Bremm, Sebastian; Schreck, Tobias
2011-01-01
Cluster analysis is an important data mining technique for analyzing large amounts of data, reducing many objects to a limited number of clusters. Cluster visualization techniques aim at supporting the user in better understanding the characteristics and relationships among the found clusters. While promising approaches to visual cluster analysis already exist, these usually fall short of incorporating the quality of the obtained clustering results. However, due to the nature of the clustering process, quality plays an important aspect, as for most practical data sets, typically many different clusterings are possible. Being aware of clustering quality is important to judge the expressiveness of a given cluster visualization, or to adjust the clustering process with refined parameters, among others. In this work, we present an encompassing suite of visual tools for quality assessment of an important visual cluster algorithm, namely, the Self-Organizing Map (SOM) technique. We define, measure, and visualize the notion of SOM cluster quality along a hierarchy of cluster abstractions. The quality abstractions range from simple scalar-valued quality scores up to the structural comparison of a given SOM clustering with output of additional supportive clustering methods. The suite of methods allows the user to assess the SOM quality on the appropriate abstraction level, and arrive at improved clustering results. We implement our tools in an integrated system, apply it on experimental data sets, and show its applicability.
Real-time processing in picture naming in adults who stutter: ERP evidence
Maxfield, Nathan D.; Morris, Kalie; Frisch, Stefan A.; Morphew, Kathryn; Constantine, Joseph L.
2014-01-01
Objective The aim was to compare real-time language/cognitive processing in picture naming in adults who stutter (AWS) versus typically-fluent adults (TFA). Methods Participants named pictures preceded by masked prime words. Primes and target picture labels were Identical or mismatched. Priming effects on naming and picture-elicited ERP activity were analyzed. Vocabulary knowledge correlations with these measures were assessed. Results Priming improved naming RTs and accuracy in both groups. RTs were longer for AWS, and correlated positively with receptive vocabulary in TFA. Electrophysiologically, posterior-P1 amplitude negatively correlated with expressive vocabulary in TFA versus receptive vocabulary in AWS. Frontal/temporal-P1 amplitude correlated positively with expressive vocabulary in AWS. Identity priming enhanced frontal/posterior-N2 amplitude in both groups, and attenuated P280 amplitude in AWS. N400 priming was topographically-restricted in AWS. Conclusions Results suggest that conceptual knowledge was perceptually-grounded in expressive vocabulary in TFA versus receptive vocabulary in AWS. Poorer expressive vocabulary in AWS was potentially associated with greater suppression of irrelevant conceptual information. Priming enhanced N2-indexed cognitive control and visual attention in both groups. P280-indexed focal attention attenuated with priming in AWS only. Topographically-restricted N400 priming suggests that lemma/word form connections were weaker in AWS. Significance Real-time language/cognitive processing in picture naming operates differently in AWS. PMID:24910149
Wu, Helen C; Nagasawa, Tetsuro; Brown, Erik C; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2011-10-01
We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Both tasks commonly elicited gamma-augmentation (maximally at 80-100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. The present study increases our understanding of the visual-language pathways. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S
2017-05-01
Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills. Groups of 7-9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.
ERIC Educational Resources Information Center
Malpass, Debra; Meyer, Antje S.
2010-01-01
The goal of the study was to examine whether speakers naming pairs of objects would retrieve the names of the objects in parallel or in sequence. To this end, we recorded the speakers' eye movements and determined whether the difficulty of retrieving the name of the 2nd object affected the duration of the gazes to the 1st object. Two experiments,…
A selective deficit in imageable concepts: a window to the organization of the conceptual system
Gvion, Aviah; Friedmann, Naama
2013-01-01
Nissim, a 64 years old Hebrew-speaking man who sustained an ischemic infarct in the left occipital lobe, exhibited an intriguing pattern. He could hold a deep and fluent conversation about abstract and complex issues, such as the social risks in unemployment, but failed to retrieve imageable words such as ball, spoon, carrot, or giraffe. A detailed study of the words he could and could not retrieve, in tasks of picture naming, tactile naming, and naming to definition, indicated that whereas he was able to retrieve abstract words, he had severe difficulties when trying to retrieve imageable words. The same dissociation also applied for proper names—he could retrieve names of people who have no visual image attached to their representation (such as the son of the biblical Abraham), but could not name people who had a visual image (such as his own son, or Barack Obama). When he tried to produce imageable words, he mainly produced perseverations and empty speech, and some semantic paraphasias. He did not produce perseverations when he tried to retrieve abstract words. This suggests that perseverations may occur when the phonological production system produces a word without proper activation in the semantic lexicon. Nissim evinced a similar dissociation in comprehension—he could understand abstract words and sentences but failed to understand sentences with imageable words, and to match spoken imageable words to pictures or to semantically related imageable words. He was able to understand proverbs with imageable literal meaning but abstract figurative meaning. His comprehension was impaired also in tasks of semantic associations of pictures, pointing to a conceptual, rather than lexical source of the deficit. His visual perception as well as his phonological input and output lexicons and buffers (assessed by auditory lexical decision, word and sentence repetition, and writing to dictation) were intact, supporting a selective conceptual system impairment. He was able to retrieve gestures for objects and pictures he saw, indicating that his access to concepts often sufficed for the activation of the motoric information but did not suffice for access to the entry in the semantic lexicon. These results show that imageable concepts can be selectively impaired, and shed light on the organization of conceptual-semantic system. PMID:23785321
Object-processing neural efficiency differentiates object from spatial visualizers.
Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria
2008-11-19
The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.
Visual saliency detection based on modeling the spatial Gaussianity
NASA Astrophysics Data System (ADS)
Ju, Hongbin
2015-04-01
In this paper, a novel salient object detection method based on modeling the spatial anomalies is presented. The proposed framework is inspired by the biological mechanism that human eyes are sensitive to the unusual and anomalous objects among complex background. It is supposed that a natural image can be seen as a combination of some similar or dissimilar basic patches, and there is a direct relationship between its saliency and anomaly. Some patches share high degree of similarity and have a vast number of quantity. They usually make up the background of an image. On the other hand, some patches present strong rarity and specificity. We name these patches "anomalies". Generally, anomalous patch is a reflection of the edge or some special colors and textures in an image, and these pattern cannot be well "explained" by their surroundings. Human eyes show great interests in these anomalous patterns, and will automatically pick out the anomalous parts of an image as the salient regions. To better evaluate the anomaly degree of the basic patches and exploit their nonlinear statistical characteristics, a multivariate Gaussian distribution saliency evaluation model is proposed. In this way, objects with anomalous patterns usually appear as the outliers in the Gaussian distribution, and we identify these anomalous objects as salient ones. Experiments are conducted on the well-known MSRA saliency detection dataset. Compared with other recent developed visual saliency detection methods, our method suggests significant advantages.
Word learning and the cerebral hemispheres: from serial to parallel processing of written words
Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura
2009-01-01
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140
Verifying visual properties in sentence verification facilitates picture recognition memory.
Pecher, Diane; Zanolie, Kiki; Zeelenberg, René
2007-01-01
According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.
Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.
2013-01-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770
A linked GeoData map for enabling information access
Powell, Logan J.; Varanka, Dalia E.
2018-01-10
OverviewThe Geospatial Semantic Web (GSW) is an emerging technology that uses the Internet for more effective knowledge engineering and information extraction. Among the aims of the GSW are to structure the semantic specifications of data to reduce ambiguity and to link those data more efficiently. The data are stored as triples, the basic data unit in graph databases, which are similar to the vector data model of geographic information systems (GIS); that is, a node-edge-node model that forms a graph of semantically related information. The GSW is supported by emerging technologies such as linked geospatial data, described below, that enable it to store and manage geographical data that require new cartographic methods for visualization. This report describes a map that can interact with linked geospatial data using a simulation of a data query approach called the browsable graph to find information that is semantically related to a subject of interest, visualized using the Data Driven Documents (D3) library. Such a semantically enabled map functions as a map knowledge base (MKB) (Varanka and Usery, 2017).A MKB differs from a database in an important way. The central element of a triple, alternatively called the edge or property, is composed of a logic formalization that structures the relation between the first and third parts, the nodes or objects. Node-edge-node represents the graphic form of the triple, and the subject-property-object terms represent the data structure. Object classes connect to build a federated graph, similar to a network in visual form. Because the triple property is a logical statement (a predicate), the data graph represents logical propositions or assertions accepted to be true about the subject matter. These logical formalizations can be manipulated to calculate new triples, representing inferred logical assertions, from the existing data.To demonstrate a MKB system, a technical proof-of-concept is developed that uses geographically attributed Resource Description Framework (RDF) serializations of linked data for mapping. The proof-of-concept focuses on accessing triple data from visual elements of a geographic map as the interface to the MKB. The map interface is embedded with other essential functions such as SPARQL Protocol and RDF Query Language (SPARQL) data query endpoint services and reasoning capabilities of Apache Marmotta (Apache Software Foundation, 2017). An RDF database of the Geographic Names Information System (GNIS), which contains official names of domestic feature in the United States, was linked to a county data layer from The National Map of the U.S. Geological Survey. The county data are part of a broader Government Units theme offered to the public as Esri shapefiles. The shapefile used to draw the map itself was converted to a geographic-oriented JavaScript Object Notation (JSON) (GeoJSON) format and linked through various properties with a linked geodata version of the GNIS database called “GNIS–LD” (Butler and others, 2016; B. Regalia and others, University of California-Santa Barbara, written commun., 2017). The GNIS–LD files originated in Terse RDF Triple Language (Turtle) format but were converted to a JSON format specialized in linked data, “JSON–LD” (Beckett and Berners-Lee, 2011; Sorny and others, 2014). The GNIS–LD database is composed of roughly three predominant triple data graphs: Features, Names, and History. The graphs include a set of namespace prefixes used by each of the attributes. Predefining the prefixes made the conversion to the JSON–LD format simple to complete because Turtle and JSON–LD are variant specifications of the basic RDF concept.To convert a shapefile into GeoJSON format to capture the geospatial coordinate geometry objects, an online converter, Mapshaper, was used (Bloch, 2013). To convert the Turtle files, a custom converter written in Java reconstructs the files by parsing each grouping of attributes belonging to one subject and pasting the data into a new file that follows the syntax of JSON–LD. Additionally, the Features file contained its own set of geometries, which was exported into a separate JSON–LD file along with its elevation value to form a fourth file, named “features-geo.json.” Extracted data from external files can be represented in HyperText Markup Language (HTML) path objects. The goal was to import multiple JSON–LD files using this approach.
"Did you call me?" 5-month-old infants own name guides their attention.
Parise, Eugenio; Friederici, Angela D; Striano, Tricia
2010-12-03
An infant's own name is a unique social cue. Infants are sensitive to their own name by 4 months of age, but whether they use their names as a social cue is unknown. Electroencephalogram (EEG) was measured as infants heard their own name or stranger's names and while looking at novel objects. Event related brain potentials (ERPs) in response to names revealed that infants differentiate their own name from stranger names from the first phoneme. The amplitude of the ERPs to objects indicated that infants attended more to objects after hearing their own names compared to another name. Thus, by 5 months of age infants not only detect their name, but also use it as a social cue to guide their attention to events and objects in the world.
ERIC Educational Resources Information Center
Kruk, Richard S.; Luther Ruban, Cassia
2018-01-01
Visual processes in Grade 1 were examined for their predictive influences in nonalphanumeric and alphanumeric rapid naming (RAN) in 51 poor early and 69 typical readers. In a lagged design, children were followed longitudinally from Grade 1 to Grade 3 over 5 testing occasions. RAN outcomes in early Grade 2 were predicted by speeded and nonspeeded…
ERIC Educational Resources Information Center
Kamijo, Haruo; Morii, Shingo; Yamaguchi, Wataru; Toyooka, Naoki; Tada-Umezaki, Masahito; Hirobayashi, Shigeki
2016-01-01
Various tactile methods, such as Braille, have been employed to enhance the recognition ability of chemical structures by individuals with visual disabilities. However, it is unknown whether reading aloud the names of chemical compounds would be effective in this regard. There are no systems currently available using an audio component to assist…
ERIC Educational Resources Information Center
Schotter, Elizabeth R.; Ferreira, Victor S.; Rayner, Keith
2013-01-01
Do we access information from any object we can see, or do we access information only from objects that we intend to name? In 3 experiments using a modified multiple object naming paradigm, subjects were required to name several objects in succession when previews appeared briefly and simultaneously in the same location as the target as well as at…
Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S
2016-12-01
Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.
Learning What to Remember: Vocabulary Knowledge and Children's Memory for Object Names and Features
ERIC Educational Resources Information Center
Perry, Lynn K.; Axelsson, Emma L.; Horst, Jessica S.
2016-01-01
Although young children can map a novel name to a novel object, it remains unclear what they actually remember about objects when they initially make such a name-object association. In the current study we investigated (1) what children remembered after they were initially introduced to name-object associations and (2) how their vocabulary size…
Naming-Speed Processes, Timing, and Reading: A Conceptual Review.
ERIC Educational Resources Information Center
Wolf, Maryanne; Bowers, Patricia Greig; Biddle, Kathleen
2000-01-01
This article reviews evidence for seven central questions about the role of naming-speed deficits in developmental reading disabilities. Cross-sectional, longitudinal, and cross-linguistic research on naming-speed processes, timing processes, and reading is presented. An evolving model of visual naming illustrates areas of difference and areas of…
“Did You Call Me?” 5-Month-Old Infants Own Name Guides Their Attention
Parise, Eugenio; Friederici, Angela D.; Striano, Tricia
2010-01-01
An infant's own name is a unique social cue. Infants are sensitive to their own name by 4 months of age, but whether they use their names as a social cue is unknown. Electroencephalogram (EEG) was measured as infants heard their own name or stranger's names and while looking at novel objects. Event related brain potentials (ERPs) in response to names revealed that infants differentiate their own name from stranger names from the first phoneme. The amplitude of the ERPs to objects indicated that infants attended more to objects after hearing their own names compared to another name. Thus, by 5 months of age infants not only detect their name, but also use it as a social cue to guide their attention to events and objects in the world. PMID:21151971
Wayfinding concept in University of Brawijaya
NASA Astrophysics Data System (ADS)
Firjatullah, H.; Kurniawan, E. B.; Purnamasari, W. D.
2017-06-01
Wayfinding is an activity related to the orientation and motion from first point to point of destination. Benefits of wayfinding in the area of education, namely as a means of helping direct a person to a destination so as to reduce the lostness and assist users in remembering the way through the role of space and objects wayfinding. Around 48% new students of University of Brawijaya (UB) 2015 that was ever lost when entering the campus. This shows the need for wayfinding concept to someone who was unfamiliar with the surrounding area as freshmen. This study uses mental map analysis to find the objects hierarchy wayfinding determination based on the respondents and the space syntax (visual graph analysis) as a hierarchy based on the determination of the configuration space. The overlay result say that hierarchy between both of analysis shows there are several objects which are potentially in wayfinding process on the campus of UB. The concept of wayfinding generate different treatment of the selected object based of wayfinding classification, both in maintaining the function of the object in space and develop the potential of the object wayfinding.
Age effect in generating mental images of buildings but not common objects.
Piccardi, L; Nori, R; Palermo, L; Guariglia, C; Giusberti, F
2015-08-18
Imagining a familiar environment is different from imagining an environmental map and clinical evidence demonstrated the existence of double dissociations in brain-damaged patients due to the contents of mental images. Here, we assessed a large sample of young and old participants by considering their ability to generate different kinds of mental images, namely, buildings or common objects. As buildings are environmental stimuli that have an important role in human navigation, we expected that elderly participants would have greater difficulty in generating images of buildings than common objects. We found that young and older participants differed in generating both buildings and common objects. For young participants there were no differences between buildings and common objects, but older participants found easier to generate common objects than buildings. Buildings are a special type of visual stimuli because in urban environments they are commonly used as landmarks for navigational purposes. Considering that topographical orientation is one of the abilities mostly affected in normal and pathological aging, the present data throw some light on the impaired processes underlying human navigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Semantic and visual memory codes in learning disabled readers.
Swanson, H L
1984-02-01
Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.
Chocolate smells pink and stripy: Exploring olfactory-visual synesthesia
Russell, Alex; Stevenson, Richard J.; Rich, Anina N.
2015-01-01
Odors are often difficult to identify, and can be perceived either via the nose or mouth (“flavor”; not usually perceived as a “smell”). These features provide a unique opportunity to contrast conceptual and perceptual accounts of synesthesia. We presented six olfactory-visual synesthetes with a range of odorants. They tried to identify each smell, evaluate its attributes and illustrate their elicited visual experience. Judges rated the similarity of each synesthetes’ illustrations over time (test-retest reliability). Synesthetic images were most similar from the same odor named consistently, but even inconsistently named same odors generated more similar images than different odors. This was driven by hedonic similarity. Odors presented as flavors only resulted in similar images when consistently named. Thus, the primary factor in generating a reliable synesthetic image is the name, with some influence of odor hedonics. Hedonics are a basic form of semantic knowledge, making this consistent with a conceptual basis for synaesthetic links. PMID:25895152
Misonou, Kaori; Ishiai, Sumio; Seki, Keiko; Koyama, Yasumasa; Nakano, Naomi
2004-06-01
Twelve patients with left unilateral spatial neglect were examined with a newly devised "coloured line bisection task". They were presented with a horizontal line printed in blue on one side and in red on the other side; the proportions of the blue and red segments were varied. Immediately after placement of the subjective midpoint, the line was concealed and the patients were asked to name the colours of the right and left ends. Five patients who identified the left-end colour almost correctly had no visual field defect, while the other seven whose colour naming was impaired on the left side had left visual field defect. The rightward bisection errors were similarly distributed in the fair and poor colour-naming patients except for two patients from the latter group. The lesions of the fair colour-naming patients spared the lingual and fusiform gyri, which are known to be engaged in colour processing. Patients with neglect whose visual field is preserved may neglect the leftward extension of a line but not the colour in the neglected space. The poor colour-naming patients frequently failed to name the left-end colour that appeared to the left of their subjective midpoint, which indicates that they hardly searched leftward beyond that point. In such trials, they reported that the left end had the same colour as the right end. The results suggest that in patients with neglect and left visual field defect, both the leftward extent and the colour of a line may be represented on the basis of the information from the attended right segment.
McQueen, James M; Huettig, Falk
2014-01-01
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and where strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.
[French norms of imagery for pictures, for concrete and abstract words].
Robin, Frédérique
2006-09-01
This paper deals with French norms for mental image versus picture agreement for 138 pictures and the imagery value for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass et Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, these studies focus on figurative and verbal stimuli variations in visual imagery processes.
Canonical Visual Size for Real-World Objects
Konkle, Talia; Oliva, Aude
2012-01-01
Real-world objects can be viewed at a range of distances and thus can be experienced at a range of visual angles within the visual field. Given the large amount of visual size variation possible when observing objects, we examined how internal object representations represent visual size information. In a series of experiments which required observers to access existing object knowledge, we observed that real-world objects have a consistent visual size at which they are drawn, imagined, and preferentially viewed. Importantly, this visual size is proportional to the logarithm of the assumed size of the object in the world, and is best characterized not as a fixed visual angle, but by the ratio of the object and the frame of space around it. Akin to the previous literature on canonical perspective, we term this consistent visual size information the canonical visual size. PMID:20822298
Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.
Müller, Matthias M; Trautmann, Mireille; Keitel, Christian
2016-04-01
Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.
ComVisMD - compact visualization of multidimensional data: experimenting with cricket players data
NASA Astrophysics Data System (ADS)
Dandin, Shridhar B.; Ducassé, Mireille
2018-03-01
Database information is multidimensional and often displayed in tabular format (row/column display). Presented in aggregated form, multidimensional data can be used to analyze the records or objects. Online Analytical database Processing (OLAP) proposes mechanisms to display multidimensional data in aggregated forms. A choropleth map is a thematic map in which areas are colored in proportion to the measurement of a statistical variable being displayed, such as population density. They are used mostly for compact graphical representation of geographical information. We propose a system, ComVisMD inspired by choropleth map and the OLAP cube to visualize multidimensional data in a compact way. ComVisMD displays multidimensional data like OLAP Cube, where we are mapping an attribute a (first dimension, e.g. year started playing cricket) in vertical direction, object coloring based on b (second dimension, e.g. batting average), mapping varying-size circles based on attribute c (third dimension, e.g. highest score), mapping numbers based on attribute d (fourth dimension, e.g. matches played). We illustrate our approach on cricket players data, namely on two tables Country and Player. They have a large number of rows and columns: 246 rows and 17 columns for players of one country. ComVisMD’s visualization reduces the size of the tabular display by a factor of about 4, allowing users to grasp more information at a time than the bare table display.
... other symptoms with the vision loss, seek medical attention right away. Alternative Names Transient monocular blindness; Transient monocular visual loss; TMLV; Transient monocular visual loss; Transient binocular ...
Shape and color naming are inherently asymmetrical: Evidence from practice-based interference.
Protopapas, Athanassios; Markatou, Artemis; Samaras, Evangelos; Piokos, Andreas
2017-01-01
Stroop interference is characterized by strong asymmetry between word and color naming such that the former is faster and interferes with the latter but not vice versa. This asymmetry is attributed to differential experience with naming in the two dimensions, i.e., words and colors. Here we show that training on visual-verbal paired associate tasks equivalent to color and shape naming, not involving word reading, leads to strongly asymmetric interference patterns. In two experiments adults practiced naming colors and shapes, one dimension more extensively (10days) than the other (2days), depending on group assignment. One experiment used novel shapes (ideograms) and the other familiar geometric shapes, associated with nonsense syllables. In a third experiment participants practiced naming either colors or shapes using cross-category shape and color names, respectively, for 12days. Across experiments, despite equal training of the two groups in naming the two different dimensions, color naming was strongly affected by shape even after extensive practice, whereas shape naming was resistant to interference. To reconcile these findings with theoretical accounts of interference, reading may be conceptualized as involving visual-verbal associations akin to shape naming. An inherent or early-developing advantage for naming shapes may provide an evolutionary substrate for the invention and development of reading. Copyright © 2016 Elsevier B.V. All rights reserved.
Selecting and perceiving multiple visual objects
Xu, Yaoda; Chun, Marvin M.
2010-01-01
To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encode their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882
2016-11-09
Total Number: Sub Contractors (DD882) Names of Personnel receiving masters degrees Names of personnel receiving PHDs Names of other research staff...Broadcom 5720 QP 1Gb Network Daughter Card (2) Intel Xeon E5-2680 v3 2.5GHz, 30M Cache, 9.60GT/s QPI, Turbo, HT , 12C/24T (120W...Broadcom 5720 QP 1Gb Network Daughter Card (2) Intel Xeon E5-2680 v3 2.5GHz, 30M Cache, 9.60GT/s QPI, Turbo, HT , 12C/24T (120W
[A case of left hand agraphia without callosal apraxia].
Tsuzuki, S; Indo, T; Takahashi, A
1989-01-01
A 65-year-old male who had agraphia confined to the left hand was reported. The patient was admitted to the neurological department of Kasugai city Hospital because of suddenly-developed mild right-sided hemiparesis with central facial palsy. Computerized tomography of the brain was performed 2 and 14 days after admission. As a result, low-density regions were found in the left cingulate and medial frontal gyri and the trunk of the corpus callosum. Magnetic resonance imaging of the saggital plane more clearly visualized a localized infarction affecting both the trunk of the corpus callosum and its leftward outflow. Neuropsychological findings of the patient were summarized as follows. 1) He had no difficulty in any of the actual use of object, copying the manipulation of objects, and proper use of objects according to verbal commands. 2) With the eyes closed, he could correctly name the objects handed over to the right hand, while he could do only 15 out of 20 objects handed over to the left hand. However, whichever hand an object was handed to, he could explain how to use the object. 3) He could write Hiragana, Katakana, and Kanji correctly with his right hand in accordance with verbal commands, whereas with his left hand he could do only for 20% of Hiragana, 20% of Katakana, and 90% of Kanji. 4) He could copy Kanji, Hiragana, and figures with either right or left hand. 5) He could point out verbally-presented letters using letter cards whether with the right hand or with the left hand, and could also select the letter card corresponding to the letter visually-presented.(ABSTRACT TRUNCATED AT 250 WORDS)
ERIC Educational Resources Information Center
La Heij, Wido; Boelens, Harrie; Kuipers, Jan-Rouke
2010-01-01
Cascade models of word production assume that during lexical access all activated concepts activate their names. In line with this view, it has been shown that naming an object's colour is facilitated when colour name and object name are phonologically related (e.g., "blue" and "blouse"). Prevor and Diamond's (2005) recent observation that…
76 FR 51002 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... with visual dysfunction related to traumatic brain injury, with an eye injury and a visual acuity in... of visual field in the injured eye. Categories of records in the system: Individual's full name... interventions or other operative procedures, follow up services and treatment, visual outcomes, and records with...
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a compiled to native architecture software tool for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native, GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
Design of Instrument Control Software for Solar Vector Magnetograph at Udaipur Solar Observatory
NASA Astrophysics Data System (ADS)
Gosain, Sanjay; Venkatakrishnan, P.; Venugopalan, K.
2004-04-01
A magnetograph is an instrument which makes measurement of solar magnetic field by measuring Zeeman induced polarization in solar spectral lines. In a typical filter based magnetograph there are three main modules namely, polarimeter, narrow-band spectrometer (filter), and imager(CCD camera). For a successful operation of magnetograph it is essential that these modules work in synchronization with each other. Here, we describe the design of instrument control system implemented for the Solar Vector Magnetograph under development at Udaipur Solar Observatory. The control software is written in Visual Basic and exploits the Component Object Model (COM) components for a fast and flexible application development. The user can interact with the instrument modules through a Graphical User Interface (GUI) and can program the sequence of magnetograph operations. The integration of Interactive Data Language (IDL) ActiveX components in the interface provides a powerful tool for online visualization, analysis and processing of images.
Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children
ERIC Educational Resources Information Center
Vales, Catarina; Smith, Linda B.
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…
Developmental differences in the naming of contextually non-categorical objects.
Ozcan, Mehmet
2012-02-01
This study investigates the naming process of contextually non-categorical objects in children from 3 to 9 plus 13-year-olds. 112 children participated in the study. Children were asked to narrate a story individually while looking at Mercer Mayer's textless, picture book Frog, where are you? The narratives were audio recorded and transcribed. Texts were analyzed to find out how children at different ages name contextually non-categorical objects, tree and its parts in this case. Our findings revealed that increasing age in children is a positive factor in naming objects that are parts or extended forms of an object which itself constitutes a basic category in a certain context. Younger children used categorical names more frequently to refer to parts or disfigured forms of the object than older children and adults while older children and adults used specified names to refer to the parts or extended forms of the categorical names.
Conscious intention to speak proactively facilitates lexical access during overt object naming
Strijkers, Kristof; Holcomb, Phillip J.; Costa, Albert
2013-01-01
The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain’s electrical response for a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited for objects with low frequency names started to diverge from those with high frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later eliciting a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object the brain rapidly engaged in lexical access. The data offer evidence that top-down intention to speak proactively facilitates the activation of words related to perceived objects. PMID:24039339
Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
Is Action Naming Better Preserved (than Object Naming) in Alzheimer's Disease and Why Should We Ask?
ERIC Educational Resources Information Center
Druks, Judit; Masterson, Jackie; Kopelman, Michael; Clare, Linda; Rose, Anita; Rai, Gucharan
2006-01-01
The present study compared object and action naming in patients with Alzheimer's dementia. We tested the hypothesis put forward in (some) previous studies that in Alzheimer's dementia the production of verbs, that is required in action naming, is better preserved than the production of nouns, that is required in object naming. The possible reason…
Joubert, Sven; Brambati, Simona M; Ansado, Jennyfer; Barbeau, Emmanuel J; Felician, Olivier; Didic, Mira; Lacombe, Jacinthe; Goldstein, Rachel; Chayer, Céline; Kergoat, Marie-Jeanne
2010-03-01
Semantic deficits in Alzheimer's disease have been widely documented, but little is known about the integrity of semantic memory in the prodromal stage of the illness. The aims of the present study were to: (i) investigate naming abilities and semantic memory in amnestic mild cognitive impairment (aMCI), early Alzheimer's disease (AD) compared to healthy older subjects; (ii) investigate the association between naming and semantic knowledge in aMCI and AD; (iii) examine if the semantic impairment was present in different modalities; and (iv) study the relationship between semantic performance and grey matter volume using voxel-based morphometry. Results indicate that both naming and semantic knowledge of objects and famous people were impaired in aMCI and early AD groups, when compared to healthy age- and education-matched controls. Item-by-item analyses showed that anomia in aMCI and early AD was significantly associated with underlying semantic knowledge of famous people but not with semantic knowledge of objects. Moreover, semantic knowledge of the same concepts was impaired in both the visual and the verbal modalities. Finally, voxel-based morphometry analyses revealed that semantic impairment in aMCI and AD was associated with cortical atrophy in the anterior temporal lobe (ATL) region as well as in the inferior prefrontal cortex (IPC), some of the key regions of the semantic cognition network. These findings suggest that the semantic impairment in aMCI may result from a breakdown of semantic knowledge of famous people and objects, combined with difficulties in the selection, manipulation and retrieval of this knowledge. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
How a surgeon becomes superman by visualization of intelligently fused multi-modalities
NASA Astrophysics Data System (ADS)
Erat, Okan; Pauly, Olivier; Weidert, Simon; Thaller, Peter; Euler, Ekkehard; Mutschler, Wolf; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Motivation: The existing visualization of the Camera augmented mobile C-arm (CamC) system does not have enough cues for depth information and presents the anatomical information in a confusing way to surgeons. Methods: We propose a method that segments anatomical information from X-ray and then augment it on the video images. To provide depth cues, pixels belonging to video images are classified as skin and object classes. The augmentation of anatomical information from X-ray is performed only when pixels have a larger probability of belonging to skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking of intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization when compared to the current alphablending overlay image displayed by CamC. The participants all agreed (100%) that occlusion and instrument tip position detection were immediately improved with our technique. When asked if our visualization has potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest an immediate integration of the visualization for the correct navigation and guidance of the procedure. Conclusion: Current alpha blending visualizations lack proper depth cues and can be a source of confusion for the surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
Components of action representations evoked when identifying manipulable objects
Bub, Daniel N.; Masson, Michael E. J.; Lin, Terry
2015-01-01
We examined the influence of holding planned hand actions in working memory on the time taken to visually identify objects with handles. Features of the hand actions and position of the object's handle were congruent or incongruent on two dimensions: alignment (left vs. right) and orientation (horizontal vs. vertical). When an object was depicted in an upright view, subjects were slower to name it when its handle was congruent with the planned hand actions on one dimension but incongruent on the other, relative to when the object handle and actions were congruent on both or neither dimension. This pattern is consistent with many other experiments demonstrating that a cost occurs when there is partial feature overlap between a planned action and a perceived target. An opposite pattern of results was obtained when the depicted object appeared in a 90° rotated view (e.g., a beer mug on its side), suggesting that the functional goal associated with the object (e.g., drinking from an upright beer mug) was taken into account during object perception and that this knowledge superseded the influence of the action afforded by the depicted view of the object. These results have implications for the relationship between object perception and action representations, and for the mechanisms that support the identification of rotated objects. PMID:25705187
2002-01-01
wrappers to other widely used languages, namely TCL/TK, Java, and Python . VTK is very powerful and covers polygonal models and image processing classes and...follows: � Large Data Visualization and Rendering � Information Visualization for Beginners � Rendering and Visualization in Parallel Environments
Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K
2012-03-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.
Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara
2008-09-01
Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with the Lambda-Wilks Test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than they did comprehending names of other objects (F((5,21))=13.15; p<.001). Animals were most often the easiest category to understand. The possibility of a simple explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study support the position that so called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.
Effects of Perceptual and Contextual Enrichment on Visual Confrontation Naming in Adult Aging
ERIC Educational Resources Information Center
Rogalski, Yvonne; Peelle, Jonathan E.; Reilly, Jamie
2011-01-01
Purpose: The purpose of this study was to determine the effects of enriching line drawings with color/texture and environmental context as a facilitator of naming speed and accuracy in older adults. Method: Twenty young and 23 older adults named high-frequency picture stimuli from the Boston Naming Test (Kaplan, Goodglass, & Weintraub, 2001) under…
Multidimensional brain activity dictated by winner-take-all mechanisms.
Tozzi, Arturo; Peters, James F
2018-06-21
A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.
Extraction of composite visual objects from audiovisual materials
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal
1999-08-01
An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.
Meilinger, Tobias; Schulte-Pelkum, Jörg; Frankenstein, Julia; Hardiess, Gregor; Laharnar, Naima; Mallot, Hanspeter A; Bülthoff, Heinrich H
2016-01-01
Establishing verbal memory traces for non-verbal stimuli was reported to facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not indicated before-wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression only. At the other 50% of the intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and were asked to indicate the subsequent direction of the learned route. In Experiment 1, intersections' names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, intersections' names were descriptive and participants' route memory was more accurate at named intersections. Results have implications for naming places in a city and for wayfinding aids.
Meilinger, Tobias; Schulte-Pelkum, Jörg; Frankenstein, Julia; Hardiess, Gregor; Laharnar, Naima; Mallot, Hanspeter A.; Bülthoff, Heinrich H.
2016-01-01
Establishing verbal memory traces for non-verbal stimuli was reported to facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not indicated before—wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression only. At the other 50% of the intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and were asked to indicate the subsequent direction of the learned route. In Experiment 1, intersections' names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, intersections' names were descriptive and participants' route memory was more accurate at named intersections. Results have implications for naming places in a city and for wayfinding aids. PMID:26869975
Lida Cochran, Matriarch of Visual Literacy
ERIC Educational Resources Information Center
Davis, Harry
2009-01-01
In this article, the author describes the life and works of Lida Cochran, the matriarch of visual literacy. Lida was practicing "visual literacy" long before there was an association devoted to it. Lida has worked with the AECT, ECT Foundation (the Cochran Internship is named for her husband, Lee Cochran), and the International Visual Literacy…
Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D
2015-12-01
Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited processes and prevent other tasks to be carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture-naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks but only at the expense of extraordinary high costs. (c) 2015 APA, all rights reserved).
Systems and Methods for Data Visualization Using Three-Dimensional Displays
NASA Technical Reports Server (NTRS)
Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)
2017-01-01
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D object to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
How reading differs from object naming at the neuronal level.
Price, C J; McCrory, E; Noppeney, U; Mechelli, A; Moore, C J; Biggio, N; Devlin, J T
2006-01-15
This paper uses whole brain functional neuroimaging in neurologically normal participants to explore how reading aloud differs from object naming in terms of neuronal implementation. In the first experiment, we directly compared brain activation during reading aloud and object naming. This revealed greater activation for reading in bilateral premotor, left posterior superior temporal and precuneus regions. In a second experiment, we segregated the object-naming system into object recognition and speech production areas by factorially manipulating the presence or absence of objects (pictures of objects or their meaningless scrambled counterparts) with the presence or absence of speech production (vocal vs. finger press responses). This demonstrated that the areas associated with speech production (object naming and repetitively saying "OK" to meaningless scrambled pictures) corresponded exactly to the areas where responses were higher for reading aloud than object naming in Experiment 1. Collectively the results suggest that, relative to object naming, reading increases the demands on shared speech production processes. At a cognitive level, enhanced activation for reading in speech production areas may reflect the multiple and competing phonological codes that are generated from the sublexical parts of written words. At a neuronal level, it may reflect differences in the speed with which different areas are activated and integrate with one another.
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Ultrasound medical (US) imaging non-invasively pictures inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. First fold used block based thresholding, both hard (BHT) and soft (BST), on pixels in wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold process is a better denoising method for reducing speckle and also inducing object of interest blurring. The second fold process initiates to restore object boundaries and texture with adaptive wavelet fusion. The degraded object restoration in block thresholded US image is carried through wavelet coefficient fusion of object in original US mage and block thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce highest level of contrast between the denoised pixels and the object pixels in the resultant image. Thus the proposed twofold methods are named as adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality Index (IQI) and structural similarity index (SSIM), measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.
Effects of Minority Status on Facial Recognition and Naming Performance.
ERIC Educational Resources Information Center
Roberts, Richard J.; Hamsher, Kerry
1984-01-01
Examined the differential effects of minority status in Blacks (N=94) on a facial recognition test and a naming test. Results showed that performance on the facial recognition test was relatively free of racial bias, but this was not the case for visual naming. (LLL)
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
NASA Astrophysics Data System (ADS)
Linsebarth, A.; Moscicka, A.
2010-01-01
The article describes the infl uence of the Bible geographic object peculiarities on the spatiotemporal geoinformation system of the Bible events. In the proposed concept of this system the special attention was concentrated to the Bible geographic objects and interrelations between the names of these objects and their location in the geospace. In the Bible, both in the Old and New Testament, there are hundreds of geographical names, but the selection of these names from the Bible text is not so easy. The same names are applied for the persons and geographic objects. The next problem which arises is the classification of the geographical object, because in several cases the same name is used for the towns, mountains, hills, valleys etc. Also very serious problem is related to the time-changes of the names. The interrelation between the object name and its location is also complicated. The geographic object of this same name is located in various places which should be properly correlated with the Bible text. Above mentioned peculiarities of Bible geographic objects infl uenced the concept of the proposed system which consists of three databases: reference, geographic object, and subject/thematic. The crucial component of this system is proper architecture of the geographic object database. In the paper very detailed description of this database is presented. The interrelation between the databases allows to the Bible readers to connect the Bible text with the geography of the terrain on which the Bible events occurred and additionally to have access to the other geographical and historical information related to the geographic objects.
NASA Astrophysics Data System (ADS)
Halik, Łukasz
2012-11-01
The objective of the present deliberations was to systematise our knowledge of static visual variables used to create cartographic symbols, and also to analyse the possibility of their utilisation in the Augmented Reality (AR) applications on smartphone-type mobile devices. This was accomplished by combining the visual variables listed over the years by different researchers. Research approach was to determine the level of usefulness of particular characteristics of visual variables such as selective, associative, quantitative and order. An attempt was made to provide an overview of static visual variables and to describe the AR system which is a new paradigm of the user interface. Changing the approach to the presentation of point objects is caused by applying different perspective in the observation of objects (egocentric view) than it is done on traditional analogue maps (geocentric view). Presented topics will refer to the fast-developing field of cartography, namely mobile cartography. Particular emphasis will be put on smartphone-type mobile devices and their applicability in the process of designing cartographic symbols. Celem artykułu było usystematyzowanie wiedzy na temat statycznych zmiennych wizualnych, które sa kluczowymi składnikami budujacymi sygnatury kartograficzne. Podjeto próbe zestawienia zmiennych wizualnych wyodrebnionych przez kartografów na przestrzeni ostatnich piecdziesieciu lat, zaczynajac od klasyfikacji przedstawionej przez J. Bertin’a. Dokonano analizy stopnia uzytecznosci poszczególnych zmiennych graficznych w aspekcie ich wykorzystania w projektowaniu znaków punktowych dla mobilnych aplikacji tworzonych w technologii Rzeczywistosci Rozszerzonej (Augmented Reality). Zmienne poddano analizie pod wzgledem czterech charakterystyk: selektywnosci, skojarzeniowosci, odzwierciedlenia ilosci oraz porzadku. W artykule zwrócono uwage na odmienne zastosowanie perspektywy pomiedzy tradycyjnymi analogowymi mapami (geocentrycznosc) a aplikacjami tworzonymi w technologii Rozszerzonej Rzeczywistosci (egocentrycznosc). Tresci prezentowane w pracy dotycza szybko rozwijajacej sie gałezi kartografii - kartografii mobilnej. Dodatkowy nacisk połozony został na próbe implementacji załozen projektowania punktowych znaków kartograficznych na urzadzenia mobilne typu smartphone.
WinTICS-24 --- A Telescope Control Interface for MS Windows
NASA Astrophysics Data System (ADS)
Hawkins, R. Lee
1995-12-01
WinTICS-24 is a telescope control system interface and observing assistant written in Visual Basic for MS Windows. It provides the ability to control a telescope and up to 3 other instruments via the serial ports on an IBM-PC compatible computer, all from one consistent user interface. In addition to telescope control, WinTICS contains an observing logbook, trouble log (which can automatically email its entries to a responsible person), lunar phase display, object database (which allows the observer to type in the name of an object and automatically slew to it), a time of minimum calculator for eclipsing binary stars, and an interface to the Guide CD-ROM for bringing up finder charts of the current telescope coordinates. Currently WinTICS supports control of DFM telescopes, but is easily adaptable to other telescopes and instrumentation.
Newborn chickens generate invariant object representations at the onset of visual object experience
Wood, Justin N.
2013-01-01
To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-site techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy producing facilities, (ii) takingmore » into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.« less
New insight in spiral drawing analysis methods - Application to action tremor quantification.
Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie
2017-10-01
Spiral drawing is one of the standard tests used to assess tremor severity for the clinical evaluation of medical treatments. Tremor severity is estimated through visual rating of the drawings by movement disorders experts. Different approaches based on the mathematical signal analysis of the recorded spiral drawings were proposed to replace this rater dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate them in terms of agreement with visual rating and reproducibility. Series of spiral drawings of patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared, namely static and dynamic unraveling, and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement. Among the latter, static and dynamic unravelling both display a smaller dispersion and are easier for automatic analysis. The reliable scores obtained through the proposed numerical methods allow considering that their implementation on a digitized tablet, be it connected with a computer or independent, provides an efficient automatic tool for tremor severity assessment. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Semantic distance effects on object and action naming.
Vigliocco, Gabriella; Vinson, David P; Damian, Markus F; Levelt, Willem
2002-10-01
Graded interference effects were tested in a naming task, in parallel for objects and actions. Participants named either object or action pictures presented in the context of other pictures (blocks) that were either semantically very similar, or somewhat semantically similar or semantically dissimilar. We found that naming latencies for both object and action words were modulated by the semantic similarity between the exemplars in each block, providing evidence in both domains of graded semantic effects.
Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding
ERIC Educational Resources Information Center
Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.
2010-01-01
The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…
75 FR 66131 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... Integration and Visualization System,'' JUSTICE/FBI-021, which describes the Data Integration and... Liberties Officer. JUSTICE/FBI-021 SYSTEM NAME: Data Integration and Visualization System. * * * * * SYSTEM...
ERIC Educational Resources Information Center
Matatyaho, Dalit J.; Gogate, Lakshmi J.
2008-01-01
Mothers' use of specific types of object motion in synchrony with object naming was examined, along with infants' joint attention to the mother and object, as a predictor of word learning. During a semistructured 3-min play episode, mothers (N = 24) taught the names of 2 toy objects to their preverbal 6- to 8-month-old infants. The episodes were…
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: First, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codes. Even tthough the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric in a non-traditional way indirectly, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format14 introduced by Microsoft and a SSIM-tuned17 version of it by one of the authors. We compare these two implementations with JPEG1 in two variations and a visual and PSNR optimal JPEG200013 implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT based metric.
How brand names are special: brands, words, and hemispheres.
Gontijo, Possidonia F D; Rayman, Janice; Zhang, Shi; Zaidel, Eran
2002-09-01
Previous research has consistently shown differences between the processing of proper names and of common nouns, leading to the belief that proper names possess a special neuropsychological status. We investigate the category of brand names and suggest that brand names also have a special neuropsychological status, but one which is different from proper names. The findings suggest that the hemispheric lexical status of the brand names is mixed--they behave like words in some respects and like nonwords in others. Our study used familiar upper case brand names, common nouns, and two different types of nonwords ("weird" and "normal") differing in length, as stimuli in a lateralized lexical decision task (LDT). Common nouns, brand names, weird nonwords, and normal nonwords were recognized in that decreasing order of speed and accuracy. A right visual field (RVF) advantage was found for all four lexical types. Interestingly, brand names, similar to nonwords, were found to be less lateralized than common nouns, consistent with theories of category-specific lexical processing. Further, brand names were the only type of lexical items to show a capitalization effect: brand names were recognized faster when they were presented in upper case than in lower case. In addition, while string length affected the recognition of common nouns only in the left visual field (LVF) and the recognition of nonwords only in the RVF, brand names behaved like common nouns in exhibiting length effects only in the LVF. Copyright 2002 Elsevier Science (USA)
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
Factors influencing the response latencies of subnormal children in naming pictures.
Elliott, C
1978-08-01
The times taken to name 56 drawings of objects on five separate occasions were analysed for 21 ESN(M) and 21 ESN(S) children, matched for picture-naming vocabulary. The ESN(S) group not only had a higher mean response latency but also showed greater inter- and intra-subject variance. Nine objects were selected whose names have a Thorndike-Lorge language frequency of 50 words per million or greater, and nine others were selected with a frequency of less than 50 words per million. Each object was drawn in two ways, one giving a two-dimensional outline with the addition of important detail, the other drawing also incorporating cues indicating the depth of the object. An analysis of variance of the children's latencies in naming the selected 36 pictures of 18 objects over five trials indicated that the method of drawing had no effect upon naming latencies. Pictures with high-frequency names were named faster than those with lower frequency names, the ESN(S) group showing a greater rate of increase in naming latency for the lower frequency words than the ESN(M) children. Results were discussed in terms of the Oldfield and Lachman models of lexical memory storage and of the search processes required for the retrieval of names.
Representational neglect for words as revealed by bisection tasks.
Arduino, Lisa S; Marinelli, Chiara Valeria; Pasotti, Fabrizio; Ferrè, Elisa Raffaella; Bottini, Gabriella
2012-03-01
In the present study, we showed that a representational disorder for words can dissociate from both representational neglect for objects and neglect dyslexia. This study involved 14 brain-damaged patients with left unilateral spatial neglect and a group of normal subjects. Patients were divided into four groups based on presence of left neglect dyslexia and representational neglect for non-verbal material, as evaluated by the Clock Drawing test. The patients were presented with bisection tasks for words and lines. The word bisection tasks (with words of five and seven letters) comprised the following: (1) representational bisection: the experimenter pronounced a word and then asked the patient to name the letter in the middle position; (2) visual bisection: same as (1) with stimuli presented visually; and (3) motor bisection: the patient was asked to cross out the letter in the middle position. The standard line bisection task was presented using lines of different length. Consistent with the literature, long lines were bisected to the right and short lines, rendered comparable in length to the words of the word bisection test, deviated to the left (crossover effect). Both patients and controls showed the same leftward bias on words in the visual and motor bisection conditions. A significant difference emerged between the groups only in the case of the representational bisection task, whereas the group exhibiting neglect dyslexia associated with representational neglect for objects showed a significant rightward bias, while the other three patient groups and the controls showed a leftward bisection bias. Neither the presence of neglect alone nor the presence of visual neglect dyslexia was sufficient to produce a specific disorder in mental imagery. These results demonstrate a specific representational neglect for words independent of both representational neglect and neglect dyslexia. ©2011 The British Psychological Society.
Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław
2014-06-05
"SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.
Balthazar, Marcio L.F.; Yasuda, Clarissa L.; Lopes, Tátila M.; Pereira, Fabrício R.S.; Damasceno, Benito Pereira; Cendes, Fernando
2011-01-01
Neuroanatomical correlations of naming and lexical-semantic memory are not yet fully understood. The most influential approaches share the view that semantic representations reflect the manner in which information has been acquired through perception and action, and that each brain area processes different modalities of semantic representations. Despite these anatomical differences in semantic processing, generalization across different features that have similar semantic significance is one of the main characteristics of human cognition. Methods We evaluated the brain regions related to naming, and to the semantic generalization, of visually presented drawings of objects from the Boston Naming Test (BNT), which comprises different categories, such as animals, vegetables, tools, food, and furniture. In order to create a model of lesion method, a sample of 48 subjects presenting with a continuous decline both in cognitive functions, including naming skills, and in grey matter density (GMD) was compared to normal young adults with normal aging, amnestic mild cognitive impairment (aMCI) and mild Alzheimer’s disease (AD). Semantic errors on the BNT, as well as naming performance, were correlated with whole brain GMD as measured by voxel-based morphometry (VBM). Results The areas most strongly related to naming and to semantic errors were the medial temporal structures, thalami, superior and inferior temporal gyri, especially their anterior parts, as well as prefrontal cortices (inferior and superior frontal gyri). Conclusion The possible role of each of these areas in the lexical-semantic networks was discussed, along with their contribution to the models of semantic memory organization. PMID:29213726
Category-Specific Naming and Recognition Deficits in Temporal Lobe Epilepsy Surgical Patients
Drane, Daniel L.; Ojemann, George A.; Aylward, Elizabeth; Ojemann, Jeffrey G.; Johnson, L. Clark; Silbergeld, Daniel L.; Miller, John W.; Tranel, Daniel
2008-01-01
Objective Based upon Damasio's “Convergence Zone” model of semantic memory, we predicted that epilepsy surgical patients with anterior temporal lobe (TL) seizure onset would exhibit a pattern of category-specific naming and recognition deficits not observed in patients with seizures arising elsewhere. Methods We assessed epilepsy patients with unilateral seizure onset of anterior TL or other origin (n = 22), pre- or postoperatively, using a set of category-specific items and a conventional measure of visual naming (Boston Naming Test: BNT). Results Category-specific naming deficits were exhibited by patients with dominant anterior TL seizure onset/resection for famous faces and animals, while category-specific recognition deficits for these same categories were exhibited by patients with nondominant anterior TL onset/resection. Patients with other seizure onset did not exhibit category-specific deficits. Naming and recognition deficits were frequently not detected by the BNT, which samples only a limited range of stimuli. Interpretation Consistent with the “convergence zone” framework, results suggest that the nondominant anterior TL plays a major role in binding sensory information into conceptual percepts for certain stimuli, while dominant TL regions function to provide a link to verbal labels for these percepts. Although observed category-specific deficits were striking, they were often missed by the BNT, suggesting that they are more prevalent than recognized in both pre- and postsurgical epilepsy patients. Systematic investigation of these deficits could lead to more refined models of semantic memory, aid in the localization of seizures, and contribute to modifications in surgical technique and patient selection in epilepsy surgery to improve neurocognitive outcome. PMID:18206185
Styles, Suzy J; Plunkett, Kim; Duta, Mihaela D
2015-10-01
Recent behavioural studies with toddlers have demonstrated that simply viewing a picture in silence triggers a cascade of linguistic processing which activates a representation of the picture's name (Mani and Plunkett, 2010, 2011). Electrophysiological studies have also shown that viewing a picture modulates the auditory evoked potentials (AEPs) triggered by later speech, from early in the second year of life (Duta et al., 2012; Friedrich and Friederici, 2005; Mani et al., 2011) further supporting the notion that picture viewing gives rise to a representation of the picture's name against which later speech can be matched. However, little is known about how and when the implicit name arises during picture viewing, or about the electrophysiological activity which supports this linguistic process. We report differences in the visual evoked potentials (VEPs) of fourteen-month-old infants who saw photographs of animals and objects, some of which were name-known (lexicalized), while waiting for an auditory label to be presented. During silent picture viewing, lateralized neural activity was selectively triggered by lexicalized items, as compared to nameless items. Lexicalized items generated a short-lasting negative-going deflection over frontal, left centro-temporal, and left occipital regions shortly after the picture appeared (126-225 ms). A positive deflection was also observed over the right hemisphere (particularly centro-temporal regions) in a later, longer-lasting window (421-720 ms). The lateralization of these differences in the VEP suggests the possible involvement of linguistic processes during picture viewing, and may reflect activity involved in the implicit activation of the picture's name. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Styles, Suzy J.; Plunkett, Kim; Duta, Mihaela D.
2015-01-01
Recent behavioural studies with toddlers have demonstrated that simply viewing a picture in silence triggers a cascade of linguistic processing which activates a representation of the picture’s name (Mani and Plunkett, 2010, 2011). Electrophysiological studies have also shown that viewing a picture modulates the auditory evoked potentials (AEPs) triggered by later speech, from early in the second year of life (Duta et al., 2012; Friedrich and Friederici, 2005; Mani et al., 2011) further supporting the notion that picture viewing gives rise to a representation of the picture’s name against which later speech can be matched. However, little is known about how and when the implicit name arises during picture viewing, or about the electrophysiological activity which supports this linguistic process. We report differences in the visual evoked potentials (VEPs) of fourteen-month-old infants who saw photographs of animals and objects, some of which were name-known (lexicalized), while waiting for an auditory label to be presented. During silent picture viewing, lateralized neural activity was selectively triggered by lexicalized items, as compared to nameless items. Lexicalized items generated a short-lasting negative-going deflection over frontal, left centro-temporal, and left occipital regions shortly after the picture appeared (126–225 ms). A positive deflection was also observed over the right hemisphere (particularly centro-temporal regions) in a later, longer-lasting window (421–720 ms). The lateralization of these differences in the VEP suggests the possible involvement of linguistic processes during picture viewing, and may reflect activity involved in the implicit activation of the picture’s name. PMID:26232744
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Bolzani, Laura H.; Betancourt, Eugene A.
2006-01-01
We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word-object relations. Following 5 min of free play, 24 mothers taught their 6- to 8-month-olds the names of 2 toy objects, "Gow" and "Chi," during a 3-min play…
Pure associative tactile agnosia for the left hand: clinical and anatomo-functional correlations.
Veronelli, Laura; Ginex, Valeria; Dinacci, Daria; Cappa, Stefano F; Corbo, Massimo
2014-09-01
Associative tactile agnosia (TA) is defined as the inability to associate information about object sensory properties derived through tactile modality with previously acquired knowledge about object identity. The impairment is often described after a lesion involving the parietal cortex (Caselli, 1997; Platz, 1996). We report the case of SA, a right-handed 61-year-old man affected by first ever right hemispheric hemorrhagic stroke. The neurological examination was normal, excluding major somaesthetic and motor impairment; a brain magnetic resonance imaging (MRI) confirmed the presence of a right subacute hemorrhagic lesion limited to the post-central and supra-marginal gyri. A comprehensive neuropsychological evaluation detected a selective inability to name objects when handled with the left hand in the absence of other cognitive deficits. A series of experiments were conducted in order to assess each stage of tactile recognition processing using the same stimulus sets: materials, 3D geometrical shapes, real objects and letters. SA and seven matched controls underwent the same experimental tasks during four sessions in consecutive days. Tactile discrimination, recognition, pantomime, drawing after haptic exploration out of vision and tactile-visual matching abilities were assessed. In addition, we looked for the presence of a supra-modal impairment of spatial perception and of specific difficulties in programming exploratory movements during recognition. Tactile discrimination was intact for all the stimuli tested. In contrast, SA was able neither to recognize nor to pantomime real objects manipulated with the left hand out of vision, while he identified them with the right hand without hesitations. Tactile-visual matching was intact. Furthermore, SA was able to grossly reproduce the global shape in drawings but failed to extract details of objects after left-hand manipulation, and he could not identify objects after looking at his own drawings. This case confirms the existence of selective associative TA as a left hand-specific deficit in recognizing objects. This deficit is not related to spatial perception or to the programming of exploratory movements. The cross-modal transfer of information via visual perception permits the activation of a partially degraded image, which alone does not allow the proper recognition of the initial tactile stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
Motor-visual neurons and action recognition in social interactions.
de la Rosa, Stephan; Bülthoff, Heinrich H
2014-04-01
Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for visual recognition of social information in social interactions - namely, context-specific and contingency based learning.
van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J
2001-10-01
It is unknown to what extent the performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card I (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study when some of this age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance which need to be accounted for when the Stroop test is used both in research and in clinical settings. Stroop performance measured from older individuals with unknown visual status should be interpreted with caution.
VISUAL TRAINING AND READING PERFORMANCE.
ERIC Educational Resources Information Center
ANAPOLLE, LOUIS
VISUAL TRAINING IS DEFINED AS THE FIELD OF OCULAR REEDUCATION AND REHABILITATION OF THE VARIOUS VISUAL SKILLS THAT ARE OF PARAMOUNT IMPORTANCE TO SCHOOL ACHIEVEMENT, AUTOMOBILE DRIVING, OUTDOOR SPORTS ACTIVITIES, AND OCCUPATIONAL PURSUITS. A HISTORY OF ORTHOPTICS, THE SUGGESTED NAME FOR THE ENTIRE FIELD OF OCULAR REEDUCATION, IS GIVEN. READING AS…
Visual Processing Deficits in Children with Slow RAN Performance
ERIC Educational Resources Information Center
Stainthorp, Rhona; Stuart, Morag; Powell, Daisy; Quinlan, Philip; Garwood, Holly
2010-01-01
Two groups of 8- to 10-year-olds differing in rapid automatized naming speed but matched for age, verbal and nonverbal ability, phonological awareness, phonological memory, and visual acuity participated in four experiments investigating early visual processing. As low RAN children had significantly slower simple reaction times (SRT) this was…
Predictable Locations Aid Early Object Name Learning
Benitez, Viridiana L.; Smith, Linda B.
2012-01-01
Expectancy-based localized attention has been shown to promote the formation and retrieval of multisensory memories in adults. Three experiments show that these processes also characterize attention and learning in 16- to 18- month old infants and, moreover, that these processes may play a critical role in supporting early object name learning. The three experiments show that infants learn names for objects when those objects have predictable rather than varied locations, that infants who anticipate the location of named objects better learn those object names, and that infants integrate experiences that are separated in time but share a common location. Taken together, these results suggest that localized attention, cued attention, and spatial indexing are an inter-related set of processes in young children that aid in the early building of coherent object representations. The relevance of the experimental results and spatial attention for everyday word learning are discussed. PMID:22989872
Object representations in visual memory: evidence from visual illusions.
Ben-Shalom, Asaf; Ganel, Tzvi
2012-07-26
Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.
1990-04-01
ingtoa~, DC 3. SPON9ORING; MONITORING AGENCY NAME(S) AND ADORESS4ES) 10. SPUZOVA / MONITORING US Army Ballistic Researh Laboratory AGENCY SEP06... Biology I Kenyon B. De Greene Washington Square Center for NS Institute of Safety & Systems Management New York, NY 10003 University of Southern...Chicago, IL 60637 -- I Donald A. Glaser University of California - Berkeley I Philip B. Hollander ---Deptartment of Molecular Biology Ohio State College
Visualizing the Fundamental Physics of Rapid Earth Penetration Using Transparent Soils
2015-03-01
L R E P O R T DTRA-TR-14-80 Visualizing the Fundamental Physics of Rapid Earth Penetration Using Transparent Soils Approved for public... ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER 5e. TASK NUMBER 5f. WORK UNIT NUMBER 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS...dose absorbed) roentgen shake slug torr (mm Hg, 0 C) *The bacquerel (Bq) is the SI unit of radioactivity ; 1 Bq = 1 event/s. **The Gray (GY) is
Demographic factors and retrieval of object and proper names after age 70
Fridkin, Shimon; Ayalon, Liat
2018-01-01
Purpose This research aimed to investigate whether demographic factors are similarly related to retrieval of object and proper names. Methods The sample included 5,907 individuals above age 70 who participated in the Health and Retirement Study between 2004 and 2012. Participants were asked to name two objects as well as the US President and Vice President. Latent growth curve models examined the associations of age, education, and self-rated health with baseline levels and change trajectories in retrieval. Results Age and education were more strongly related to retrieval of proper names than to retrieval of object names, both for baseline scores and for change trajectory. Similar effects of self-rated health emerged for both types of stimuli. Conclusions The results show that examining object names and proper names together as indication of cognitive status in the HRS might overlook important differences between the two types of stimuli, in both baseline performance and longitudinal change. PMID:29370264
Automatic Recognition of Object Names in Literature
NASA Astrophysics Data System (ADS)
Bonnin, C.; Lesteven, S.; Derriere, S.; Oberto, A.
2008-08-01
SIMBAD is a database of astronomical objects that provides (among other things) their bibliographic references in a large number of journals. Currently, these references have to be entered manually by librarians who read each paper. To cope with the increasing number of papers, CDS develops a tool to assist the librarians in their work, taking advantage of the Dictionary of Nomenclature of Celestial Objects, which keeps track of object acronyms and of their origin. The program searches for object names directly in PDF documents by comparing the words with all the formats stored in the Dictionary of Nomenclature. It also searches for variable star names based on constellation names and for a large list of usual names such as Aldebaran or the Crab. Object names found in the documents often correspond to several astronomical objects. The system retrieves all possible matches, displays them with their object type given by SIMBAD, and lets the librarian make the final choice. The bibliographic reference can then be automatically added to the object identifiers in the database. Besides, the systematic usage of the Dictionary of Nomenclature, which is updated manually, permitted to automatically check it and to detect errors and inconsistencies. Last but not least, the program collects some additional information such as the position of the object names in the document (in the title, subtitle, abstract, table, figure caption...) and their number of occurrences. In the future, this will permit to calculate the 'weight' of an object in a reference and to provide SIMBAD users with an important new information, which will help them to find the most relevant papers in the object reference list.
Effect of Message Type on the Visual Attention of Adults With Traumatic Brain Injury.
Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen; Myers, Angela
2017-05-17
The purpose of this investigation was to measure the effect of message type (i.e., action, naming) on the visual attention patterns of individuals with and without traumatic brain injury (TBI) when viewing grids composed of 3 types of images (i.e., icons, decontextualized photographs, and contextualized photographs). Fourteen adults with TBI and 14 without TBI-assigned either to an action or naming message condition-viewed grids composed of 3 different image types. Participants' task was to select/sustain visual fixation on the image they felt best represented a stated message (i.e., action or naming). With final fixation location serving as a proxy for selection, participants in the naming message condition selected decontextualized photographs significantly more often than the other 2 image types. Participants in the action message condition selected contextualized photographs significantly more frequently than the other 2 image types. Minimal differences were noted between participant groups. This investigation provides preliminary evidence of the relationship between image and message type. Clinicians involved in the selection of images used for message representation should consider the message being represented when designing supports for people with TBI. Further research is necessary to fully understand the relationship between images and message type.
Xiao, Youping; Kavanau, Christopher; Bertin, Lauren; Kaplan, Ehud
2011-01-01
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of "warm" and "cool". This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
Conscious experience and episodic memory: hippocampus at the crossroads.
Behrendt, Ralf-Peter
2013-01-01
If an instance of conscious experience of the seemingly objective world around us could be regarded as a newly formed event memory, much as an instance of mental imagery has the content of a retrieved event memory, and if, therefore, the stream of conscious experience could be seen as evidence for ongoing formation of event memories that are linked into episodic memory sequences, then unitary conscious experience could be defined as a symbolic representation of the pattern of hippocampal neuronal firing that encodes an event memory - a theoretical stance that may shed light into the mind-body and binding problems in consciousness research. Exceedingly detailed symbols that describe patterns of activity rapidly self-organizing, at each cycle of the θ rhythm, in the hippocampus are instances of unitary conscious experience that jointly constitute the stream of consciousness. Integrating object information (derived from the ventral visual stream and orbitofrontal cortex) with contextual emotional information (from the anterior insula) and spatial environmental information (from the dorsal visual stream), the hippocampus rapidly forms event codes that have the informational content of objects embedded in an emotional and spatiotemporally extending context. Event codes, formed in the CA3-dentate network for the purpose of their memorization, are not only contextualized but also allocentric representations, similarly to conscious experiences of events and objects situated in a seemingly objective and observer-independent framework of phenomenal space and time. Conscious perception, creating the spatially and temporally extending world that we perceive around us, is likely to be evolutionarily related to more fleeting and seemingly internal forms of conscious experience, such as autobiographical memory recall, mental imagery, including goal anticipation, and to other forms of externalized conscious experience, namely dreaming and hallucinations; and evidence pointing to an important contribution of the hippocampus to these conscious phenomena will be reviewed.
Afterimages are biased by top-down information.
Utz, Sandra; Carbon, Claus-Christian
2015-01-01
The afterimage illusion refers to a complementary colored image continuing to appear in the observer's vision after the exposure to the original image has ceased. It is assumed to be a phenomenon of the primary visual pathway, caused by overstimulation of photoreceptors of the retina. The aim of the present study was to investigate the nature of afterimage perceptions; mainly whether it is a mere physical, that is, low-level effect or whether it can be modulated by top-down processes, that is, high-level processes. Participants were first exposed to five either strongly female or male faces (Experiment 1), objects highly associated with female or male gender (Experiment 2) or female versus male names (Experiment 3), followed by a negativated image of a gender-neutral face which had to be fixated for 20s to elicit an afterimage. Participants had to rate their afterimages according to sexual dimorphism, showing that the afterimage of the gender-neutral face was perceived as significantly more female in the female priming condition compared with the male priming condition, independently of the priming quality (faces, objects, and names). Our results documented, in addition to previously presumed bottom-up mechanisms, a prominent influence of top-down processing on the perception of afterimages via priming mechanisms (female primes led to more female afterimage perception). © The Author(s) 2015.
DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide
2013-01-01
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases we observed IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700
Decoding information about dynamically occluded objects in visual cortex
Erlikhman, Gennady; Caplovitz, Gideon P.
2016-01-01
During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder, even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether there is information maintained about the object itself (i.e. its shape or identity) or, non-object-specific information such as its position or velocity as it is tracked behind an occluder as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine the representation of information within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially-specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using MVPA. In contrast, object identity could be decoded in spatially-specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may represent the dynamically occluded object’s position or motion path, while later visual areas represent object-specific information. PMID:27663987
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2010-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454
Infants' prospective control during object manipulation in an uncertain environment.
Gottwald, Janna M; Gredebäck, Gustaf
2015-08-01
This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition) in another they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When being confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than what could be observed in the different color condition, expressed by a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, heighten their dependence on non-visual information (tactile, sensorimotor memory) in order to estimate weight and pre-adjust their lifting actions in a prospective manner.
Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.
2016-01-01
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2,) and dorsal (TO1-2, and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
Learning and Forgetting New Names and Objects in MCI and AD
ERIC Educational Resources Information Center
Gronholm-Nyman, Petra; Rinne, Juha O.; Laine, Matti
2010-01-01
We studied how subjects with mild cognitive impairment (MCI), early Alzheimer's disease (AD) and age-matched controls learned and maintained the names of unfamiliar objects that were trained with or without semantic support (object definitions). Naming performance, phonological cueing, incidental learning of the definitions and recognition of the…
Unboxing the Black Box of Visual Expertise in Medicine
ERIC Educational Resources Information Center
Jarodzka, Halszka; Boshuizen, Henny P. .
2017-01-01
Visual expertise in medicine has been a subject of research since many decades. Interestingly, it has been investigated from two little related fields, namely the field that focused mainly on the visual search aspects whilst ignoring higher-level cognitive processes involved in medical expertise, and the field that mainly focused on these…
Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity
ERIC Educational Resources Information Center
Chen, Yi-Chuan; Spence, Charles
2011-01-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…
Chapter 18: Web-based Tools - NED VO Services
NASA Astrophysics Data System (ADS)
Mazzarella, J. M.; NED Team
The NASA/IPAC Extragalactic Database (NED) is a thematic, web-based research facility in widespread use by scientists, educators, space missions, and observatory operations for observation planning, data analysis, discovery, and publication of research about objects beyond our Milky Way galaxy. NED is a portal into a systematic fusion of data from hundreds of sky surveys and tens of thousands of research publications. The contents and services span the entire electromagnetic spectrum from gamma rays through radio frequencies, and are continuously updated to reflect the current literature and releases of large-scale sky survey catalogs. NED has been on the Internet since 1990, growing in content, automation and services with the evolution of information technology. NED is the world's largest database of crossidentified extragalactic objects. As of December 2006, the system contains approximately 10 million objects and 15 million multi-wavelength cross-IDs. Over 4 thousand catalogs and published lists covering the entire electromagnetic spectrum have had their objects cross-identified or associated, with fundamental data parameters federated for convenient queries and retrieval. This chapter describes the interoperability of NED services with other components of the Virtual Observatory (VO). Section 1 is a brief overview of the primary NED web services. Section 2 provides a tutorial for using NED services currently available through the NVO Registry. The "name resolver" provides VO portals and related internet services with celestial coordinates for objects specified by catalog identifier (name); any alias can be queried because this service is based on the source cross-IDs established by NED. All major services have been updated to provide output in VOTable (XML) format that can be accessed directly from the NED web interface or using the NVO registry. These include access to images via SIAP, Cone- Search queries, and services providing fundamental, multi-wavelength extragalactic data such as positions, redshifts, photometry and spectral energy distributions (SEDs), and sizes (all with references and uncertainties when available). Section 3 summarizes the advantages of accessing the NED "name resolver" and other NED services via the web to replace the legacy "server mode" custom data structure previously available through a function library provided only in the C programming language. Section 4 illustrates visualization via VOPlot of an SED and the spatial distribution of sources from a NED All-Sky (By Parameters) query. Section 5 describes the new NED Spectral Archive, illustrating how VOTables are being used to standardize the data and metadata as well as the physical units of spectra made available by authors of journal articles and producers of major survey archives; quick-look spectral analysis through convenient interoperability with the SpecView (STScI) Java applet is also shown. Section 6 closes with a summary of the capabilities described herein, which greatly simplify interoperability of NED with other components of the VO, enabling new opportunities for discovery, visualization, and analysis of multiwavelength data.
Episodic and semantic memory in children with mesial temporal sclerosis.
Rzezak, Patricia; Guimarães, Catarina; Fuentes, Daniel; Guerreiro, Marilisa M; Valente, Kette Dualibi Ramos
2011-07-01
The aim of this study was to analyze semantic and episodic memory deficits in children with mesial temporal sclerosis (MTS) and their correlation with clinical epilepsy variables. For this purpose, 19 consecutive children and adolescents with MTS (8 to 16 years old) were evaluated and their performance on five episodic memory tests (short- and long-term memory and learning) and four semantic memory tests was compared with that of 28 healthy volunteers. Patients performed worse on tests of immediate and delayed verbal episodic memory, visual episodic memory, verbal and visual learning, mental scanning for semantic clues, object naming, word definition, and repetition of sentences. Clinical variables such as early age at seizure onset, severity of epilepsy, and polytherapy impaired distinct types of memory. These data confirm that children with MTS have episodic memory deficits and add new information on semantic memory. The data also demonstrate that clinical variables contribute differently to episodic and semantic memory performance. Copyright © 2011 Elsevier Inc. All rights reserved.
Deficit in visual temporal integration in autism spectrum disorders.
Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru
2010-04-07
Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.
Marginalization in neural circuits with divisive normalization
Beck, J.M.; Latham, P.E.; Pouget, A.
2011-01-01
A wide range of computations performed by the nervous system involves a type of probabilistic inference known as marginalization. This computation comes up in seemingly unrelated tasks, including causal reasoning, odor recognition, motor control, visual tracking, coordinate transformations, visual search, decision making, and object recognition, to name just a few. The question we address here is: how could neural circuits implement such marginalizations? We show that when spike trains exhibit a particular type of statistics – associated with constant Fano factors and gain-invariant tuning curves, as is often reported in vivo – some of the more common marginalizations can be achieved with networks that implement a quadratic nonlinearity and divisive normalization, the latter being a type of nonlinear lateral inhibition that has been widely reported in neural circuits. Previous studies have implicated divisive normalization in contrast gain control and attentional modulation. Our results raise the possibility that it is involved in yet another, highly critical, computation: near optimal marginalization in a remarkably wide range of tasks. PMID:22031877
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw-data. Object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a data-base of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The data-base can be retrieved from a file, edited during the framework operation and stored to file. EVE prototype was developed within the ALICE collaboration and has been included into ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of AliEve visualization framework in ALICE, Firework physics-oriented event-display in CMS, and as the visualization engine of FairRoot in FAIR.
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide
2017-01-01
Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to the rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.
2012-01-01
College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087
Resilience to the contralateral visual field bias as a window into object representations
Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.
2016-01-01
Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998
Sanjuán, Ana; Hope, Thomas M.H.; Parker Jones, 'Ōiwi; Prejawa, Susan; Oberhuber, Marion; Guerin, Julie; Seghier, Mohamed L.; Green, David W.; Price, Cathy J.
2015-01-01
We used fMRI in 35 healthy participants to investigate how two neighbouring subregions in the lateral anterior temporal lobe (LATL) contribute to semantic matching and object naming. Four different levels of processing were considered: (A) recognition of the object concepts; (B) search for semantic associations related to object stimuli; (C) retrieval of semantic concepts of interest; and (D) retrieval of stimulus specific concepts as required for naming. During semantic association matching on picture stimuli or heard object names, we found that activation in both subregions was higher when the objects were semantically related (mug–kettle) than unrelated (car–teapot). This is consistent with both LATL subregions playing a role in (C), the successful retrieval of amodal semantic concepts. In addition, one subregion was more activated for object naming than matching semantically related objects, consistent with (D), the retrieval of a specific concept for naming. We discuss the implications of these novel findings for cognitive models of semantic processing and left anterior temporal lobe function. PMID:25496810
A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
Establishing Visual Category Boundaries between Objects: A PET Study
ERIC Educational Resources Information Center
Saumier, Daniel; Chertkow, Howard; Arguin, Martin; Whatmough, Cristine
2005-01-01
Individuals with Alzheimer's disease (AD) often have problems in recognizing common objects. This visual agnosia may stem from difficulties in establishing appropriate visual boundaries between visually similar objects. In support of this hypothesis, Saumier, Arguin, Chertkow, and Renfrew (2001) showed that AD subjects have difficulties in…
Hsu, Nina S; Kraemer, David J M; Oliver, Robyn T; Schlichting, Margaret L; Thompson-Schill, Sharon L
2011-09-01
Neuroimaging tests of sensorimotor theories of semantic memory hinge on the extent to which similar activation patterns are observed during perception and retrieval of objects or object properties. The present study was motivated by the hypothesis that some of the seeming discrepancies across studies reflect flexibility in the systems responsible for conceptual and perceptual processing of color. Specifically, we test the hypothesis that retrieval of color knowledge can be influenced by both context (a task variable) and individual differences in cognitive style (a subject variable). In Experiment 1, we provide fMRI evidence for differential activity during color knowledge retrieval by having subjects perform a verbal task, in which context encouraged subjects to retrieve more- or less-detailed information about the colors of named common objects in a blocked experimental design. In the left fusiform, we found more activity during retrieval of more- versus less-detailed color knowledge. We also assessed preference for verbal or visual cognitive style, finding that brain activity in the left lingual gyrus significantly correlated with preference for a visual cognitive style. We replicated many of these effects in Experiment 2, in which stimuli were presented more quickly, in a random order, and in the auditory modality. This illustration of some of the factors that can influence color knowledge retrieval leads to the conclusion that tests of conceptual and perceptual overlap must consider variation in both of these processes.
ERIC Educational Resources Information Center
Wingfield, Arthur; Brownell, Hiram; Hoyte, Ken J.
2006-01-01
Although deficits in confrontation naming are a common consequence of damage to the language areas of the left cerebral hemisphere, some patients with aphasia show relatively good naming ability. We measured effects of repeated practice on naming latencies for a set of pictured objects by three aphasic patients with near-normal naming ability and…
Storage of features, conjunctions and objects in visual working memory.
Vogel, E K; Woodman, G F; Luck, S J
2001-02-01
Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
[Development of a software for 3D virtual phantom design].
Zou, Lian; Xie, Zhao; Wu, Qi
2014-02-01
In this paper, we present a 3D virtual phantom design software, which was developed based on object-oriented programming methodology and dedicated to medical physics research. This software was named Magical Phan tom (MPhantom), which is composed of 3D visual builder module and virtual CT scanner. The users can conveniently construct any complex 3D phantom, and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software for 3D phantom configuration, and has passed the real scene's application test. MPhantom will accelerate the Monte Carlo simulation for dose calculation in radiation therapy and X ray imaging reconstruction algorithm research.
ERIC Educational Resources Information Center
Abass, Bada Tayo; Isyakka, Bello; Olaolu, Ijisakin Yemi; Olusegun, Fajuyigbe Michael
2014-01-01
The study examined the effects of two and three dimensional visual objects on learners' drawing skills in junior secondary schools in OsunState, Nigeria. It also determined students' ability to identify visual objects. Furthermore, it investigated the comparative effectiveness of two and three dimensional visual objects on drawing skills of junior…
Micro-Valences: Perceiving Affective Valence in Everyday Objects
Lebrecht, Sophie; Bar, Moshe; Barrett, Lisa Feldman; Tarr, Michael J.
2012-01-01
Perceiving the affective valence of objects influences how we think about and react to the world around us. Conversely, the speed and quality with which we visually recognize objects in a visual scene can vary dramatically depending on that scene’s affective content. Although typical visual scenes contain mostly “everyday” objects, the affect perception in visual objects has been studied using somewhat atypical stimuli with strong affective valences (e.g., guns or roses). Here we explore whether affective valence must be strong or overt to exert an effect on our visual perception. We conclude that everyday objects carry subtle affective valences – “micro-valences” – which are intrinsic to their perceptual representation. PMID:22529828
Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.
Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno
2004-01-01
Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access to their stored visual memories to categorize the objects nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition Here we report a detailed investigation of a patient with bilateral occipito-temporal lesions strongly impaired at object and face recognition. NS presents normal drawing copy, and normal performance at object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performances at high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.
Interpret with caution: multicollinearity in multiple regression of cognitive data.
Morrison, Catriona M
2003-08-01
Shibihara and Kondo in 2002 reported a reanalysis of the 1997 Kanji picture-naming data of Yamazaki, Ellis, Morrison, and Lambon-Ralph in which independent variables were highly correlated. Their addition of the variable visual familiarity altered the previously reported pattern of results, indicating that visual familiarity, but not age of acquisition, was important in predicting Kanji naming speed. The present paper argues that caution should be taken when drawing conclusions from multiple regression analyses in which the independent variables are so highly correlated, as such multicollinearity can lead to unreliable output.
Conceptual Coherence Affects Phonological Activation of Context Objects during Object Naming
ERIC Educational Resources Information Center
Oppermann, Frank; Jescheniak, Jorg D.; Schriefers, Herbert
2008-01-01
In 4 picture-word interference experiments, speakers named a target object that was presented with a context object. Using auditory distractors that were phonologically related or unrelated either to the target object or the context object, the authors assessed whether phonological processing was confined to the target object or not. Phonological…
Bai, Hong-Min; Jiang, Tao; Wang, Wei-Min; Li, Tian-Dong; Liu, Yan; Lu, Yi-Cheng
2011-10-01
Category-specific recognition and naming deficits have been observed in a variety of patient populations. However, the category-specific cortices for naming famous faces, animals and man-made objects remain controversial. The present study aimed to study the specific areas involved in naming pictures of these 3 categories using functional magnetic resonance imaging. Functional images were analyzed using statistical parametric mapping and the 3 different contrasts were evaluated using t statistics by comparing the naming tasks to their baselines. The contrast images were entered into a random-effects group level analysis. The results were reported in Montreal Neurological Institute coordinates, and anatomical regions were identified using an automated anatomical labeling method with XJview 8. Naming famous faces caused more activation in the bilateral head of the hippocampus and amygdala with significant left dominance. Bilateral activation of pars triangularis and pars opercularis in the naming of famous faces was also revealed. Naming animals evoked greater responses in the left supplementary motor area, while naming man-made objects evoked more in the left premotor area, left pars orbitalis and right supplementary motor area. The extent of bilateral fusiform gyri activation by naming man-made objects was much larger than that by naming of famous faces or animals. Even in the overlapping sites of activation, some differences among the categories were found for activation in the fusiform gyri. The cortices involved in the naming process vary with the naming of famous faces, animals and man-made objects. This finding suggests that different categories of pictures should be used during intra-operative language mapping to generate a broader map of language function, in order to minimize the incidence of false-negative stimulation and permanent post-operative deficits.
ERIC Educational Resources Information Center
Al Dahhan, Noor Z.; Kirby, John R.; Brien, Donald C.; Munoz, Douglas P.
2017-01-01
Naming speed (NS) refers to how quickly and accurately participants name a set of familiar stimuli (e.g., letters). NS is an established predictor of reading ability, but controversy remains over why it is related to reading. We used three techniques (stimulus manipulations to emphasize phonological and/or visual aspects, decomposition of NS times…
The role of typography in differentiating look-alike/sound-alike drug names.
Gabriele, Sandra
2006-01-01
Until recently, when errors occurred in the course of caring for patients, blame was assigned to the healthcare professionals closest to the incident rather than examining the larger system and the actions that led up to the event. Now, the medical profession is embracing expertise and methodologies used in other fields to improve its own systems in relation to patient safety issues. This exploratory study, part of a Master's of Design thesis project, was a response to the problem of errors that occur due to confusion between look-alike/sound-alike drug names (medication names that have orthographic and/or phonetic similarities). The study attempts to provide a visual means to help differentiate problematic names using formal typographic and graphic cues. The FDA's Name Differentiation Project recommendations and other typographic alternatives were considered to address issues of attention and cognition. Eleven acute care nurses participated in testing that consisted of word-recognition tasks and questions intended to elicit opinions regarding the visual treatment of look-alike/sound-alike names in the context of a label prototype. Though limited in sample size, testing provided insight into the kinds of typographic differentiation that might be effective in a high-risk situation.
Direct-to-consumer advertising via the Internet: the role of Web site design.
Sewak, Saurabh S; Wilkin, Noel E; Bentley, John P; Smith, Mickey C
2005-06-01
Recent attempts to propose criteria for judging the quality of pharmaceutical and healthcare Web sites do not distinguish between attributes of Web site design related to content and other attributes not related to the content. The Elaboration Likelihood Model from persuasion literature is used as a framework for investigating the effects of Web site design on consequents like attitude and knowledge acquisition. A between-subjects, 2 (high or low involvement)x2 (Web site designed with high or low aspects of visual appeal) factorial design was used in this research. College students were randomly assigned to these treatment groups yielding a balanced design with 29 observations per treatment cell. Analysis of variance results for the effects of involvement and Web site design on attitude and knowledge indicated that the interaction between the independent variables was not significant in both analyses. Examination of main effects revealed that participants who viewed the Web site with higher visual appeal actually had slightly lower knowledge scores (6.32) than those who viewed the Web site with lower visual appeal (7.03, F(1,112)=3.827, P=.053). Results of this research seem to indicate that aspects of Web site design (namely aspects of visual appeal and quality) may not play a role in attaining desired promotional objectives, which can include development of favorable attitudes toward the product and facilitating knowledge acquisition.
An ERP study of recognition memory for concrete and abstract pictures in school-aged children
Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J.; Jacobson, Sandra W.; Jacobson, Joseph L.
2016-01-01
Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N = 96; mean age = 11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as “new” or “old” (i.e., previously displayed) compared to abstract pictures. ERPs were characterised by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependant on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. PMID:27329352
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images provide observers with both realistic and discomfort viewing experience, it is necessary to investigate the determinants of visual discomfort. By considering that foreground object draws most attention when human observing stereoscopic images. This paper proposes a new foreground object based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to disparity map and then the foreground object is ascertained as the one having the biggest average disparity. In the second place, three visual features being average disparity, average width and spatial complexity of foreground object are computed from the perspective of visual attention. Nevertheless, object's width and complexity do not consistently influence the perception of visual comfort in comparison with disparity. In accordance with this psychological phenomenon, we divide the whole images into four categories on the basis of different disparity and width, and exert four different models to more precisely predict its visual comfort in the third place. Experimental results show that the proposed VCA metric outperformance other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
Object-based attention underlies the rehearsal of feature binding in visual working memory.
Shen, Mowei; Huang, Xiang; Gao, Zaifeng
2015-04-01
Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.
Color Makes a Difference: Two-Dimensional Object Naming in Literate and Illiterate Subjects
ERIC Educational Resources Information Center
Reis, Alexandra; Faisca, Luis; Ingvar, Martin; Petersson, Karl Magnus
2006-01-01
Previous work has shown that illiterate subjects are better at naming two-dimensional representations of real objects when presented as colored photos as compared to black and white drawings. This raises the question if color or textural details selectively improve object recognition and naming in illiterate compared to literate subjects. In this…
Object Naming and Later Lexical Development: From Baby Bottle to Beer Bottle
ERIC Educational Resources Information Center
Ameel, Eef; Malt, Barbara; Storms, Gert
2008-01-01
Despite arguments for the relative ease of learning common noun meanings, semantic development continues well past the early years of language acquisition even for names of concrete objects. We studied evolution of the use of common nouns during later lexical development. Children aged 5-14 years and adults named common household objects and their…
Letter-case information and the identification of brand names.
Perea, Manuel; Jiménez, María; Talero, Fernanda; López-Cañada, Soraya
2015-02-01
A central tenet of most current models of visual-word recognition is that lexical units are activated on the basis of case-invariant abstract letter representations. Here, we examined this assumption by using a unique type of words: brand names. The rationale of the experiments is that brand names are archetypically printed either in lowercase (e.g., adidas) or uppercase (e.g., IKEA). This allows us to present the brand names in their standard or non-standard case configuration (e.g., adidas, IKEA vs. ADIDAS, ikea, respectively). We conducted two experiments with a brand-decision task ('is it a brand name?'): a single-presentation experiment and a masked priming experiment. Results in the single-presentation experiment revealed faster identification times of brand names in their standard case configuration than in their non-standard case configuration (i.e., adidas faster than ADIDAS; IKEA faster than ikea). In the masked priming experiment, we found faster identification times of brand names when they were preceded by an identity prime that matched its standard case configuration than when it did not (i.e., faster response times to adidas-adidas than to ADIDAS-adidas). Taken together, the present findings strongly suggest that letter-case information forms part of a brand name's graphemic information, thus posing some limits to current models of visual-word recognition. © 2014 The British Psychological Society.
Short-term memory binding deficits in Alzheimer's disease.
Parra, Mario A; Abrahams, Sharon; Fabi, Katia; Logie, Robert; Luzzi, Simona; Della Sala, Sergio
2009-04-01
Alzheimer's disease impairs long term memories for related events (e.g. faces with names) more than for single events (e.g. list of faces or names). Whether or not this associative or 'binding' deficit is also found in short-term memory has not yet been explored. In two experiments we investigated binding deficits in verbal short-term memory in Alzheimer's disease. Experiment 1: 23 patients with Alzheimer's disease and 23 age and education matched healthy elderly were recruited. Participants studied visual arrays of objects (six for healthy elderly and four for Alzheimer's disease patients), colours (six for healthy elderly and four for Alzheimer's disease patients), unbound objects and colours (three for healthy elderly and two for Alzheimer's disease patients in each of the two categories), or objects bound with colours (three for healthy elderly and two for Alzheimer's disease patients). They were then asked to recall the items verbally. The memory of patients with Alzheimer's disease for objects bound with colours was significantly worse than for single or unbound features whereas healthy elderly's memory for bound and unbound features did not differ. Experiment 2: 21 Alzheimer's disease patients and 20 matched healthy elderly were recruited. Memory load was increased for the healthy elderly group to eight items in the conditions assessing memory for single or unbound features and to four items in the condition assessing memory for the binding of these features. For Alzheimer's disease patients the task remained the same. This manipulation permitted the performance to be equated across groups in the conditions assessing memory for single or unbound features. The impairment in Alzheimer's disease patients in recalling bound objects reported in Experiment 1 was replicated. The binding cost was greater than that observed in the healthy elderly group, who did not differ in their performance for bound and unbound features. Alzheimer's disease grossly impairs the mechanisms responsible for holding integrated objects in verbal short-term memory.
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
A Cortical Network for the Encoding of Object Change
Hindy, Nicholas C.; Solomon, Sarah H.; Altmann, Gerry T.M.; Thompson-Schill, Sharon L.
2015-01-01
Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects. Early visual cortex pattern dissimilarity for object states in turn predicted the level of activation in an area of left posterior ventrolateral prefrontal cortex (pVLPFC) most responsive to conflict in a separate Stroop color-word interference task, and an area of left ventral posterior parietal cortex (vPPC) implicated in the relational binding of semantic features. We suggest that when visualizing object states, representational content instantiated across early and late visual cortex is modulated by processes in left pVLPFC and left vPPC that support selection and binding, and ultimately event comprehension. PMID:24127425
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
Optimization of Visual Information Presentation for Visual Prosthesis.
Guo, Fei; Yang, Yuan; Gao, Yong
2018-01-01
Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.
Optimization of Visual Information Presentation for Visual Prosthesis
Gao, Yong
2018-01-01
Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769
Coding the presence of visual objects in a recurrent neural network of visual cortex.
Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard
2007-01-01
Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.
Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY
2018-01-01
A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
A rodent model for the study of invariant visual object recognition
Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.
2009-01-01
The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
Invariant visual object recognition: a model, with lighting invariance.
Rolls, Edmund T; Stringer, Simon M
2006-01-01
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and in this paper we show also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in for example spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
Forder, Lewis; He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing.
He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing. PMID:28542426
NASA Astrophysics Data System (ADS)
Tufte, Lars; Trieschmann, Olaf; Carreau, Philippe; Hunsaenger, Thomas; Clayton, Peter J. S.; Barjenbruch, Ulrich
2004-02-01
The detection of accidentally or illegal marine oil discharges in the German territorial waters of the North Sea and Baltic Sea is of great importance for combating of oil spills and protection of the marine ecosystem. Therefore the German Federal Ministry of Transport set up an airborne surveillance system consisting of two Dornier DO 228-212 aircrafts equipped with a Side-Looking Airborne Radar (SLAR), a IR/UV sensor, a Microwave Radiometer (MWR) for quantification and a Laser-Flurosensor (LFS) for classification purposes of the oil spills. The flight parameters and the remote sensing data are stored in a database during the flight. A Pollution Observation Log is completed by the operator consisting of information about the detected oil spill (e.g. position, length, width) and several other information about the flight (e.g. name of navigator, name of observer). The objective was to develop an oil spill information system which integrates the described data, metadata and includes visualization and spatial analysis capabilities. The metadata are essential for further statistical analysis in spatial and temporal domains of oil spill occurrences and of the surveillance itself. It should facilitate the communication and distribution of metadata between the administrative bodies and partners of the German oil spill surveillance system. A connection between a GIS and the database allows to use the powerful visualization and spatial analysis functionality of the GIS in conjunction with the oil spill database.
Georgiou, George; Liu, Cuina; Xu, Shiyang
2017-08-01
Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age=58.99months, SD=3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.
Coelho Neto, José; Lisboa, Fernanda L C
2017-07-01
Viagra and Cialis are among the most counterfeited medicines in many parts of the world, including Brazil. Despite the many studies that have been made regarding discrimination between genuine and counterfeit samples, most published works do not contemplate generic and similar versions of these medicines and also do not explore excipients/adjuvants contributions when characterizing genuine and suspected samples. In this study, we present our findings in exploring ATR-FTIR spectral profiles for characterizing both genuine and questioned samples of several generic and brand-name sildenafil- and tadalafil-based tablets available on the Brazilian market, including Viagra and Cialis. Multi-component spectral matching (deconvolution), objective visual comparison and correlation tests were used during analysis. Besides from allowing simple and quick identification of counterfeits, results obtained evidenced the strong spectral similarities between generic and brand-named tablets employing the same active ingredient and the indistinguishability between samples produced by the same manufacturer, generic or not. For all sildenafil-based and some tadalafil-based tablets tested, differentiation between samples from different manufacturers, attributed to slight variations in excipients/adjuvants proportions, was achieved, thus allowing the possibility of tracing an unknown/unidentified tablet back to a specific manufacturer. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
On the Dynamics of Action Representations Evoked by Names of Manipulable Objects
ERIC Educational Resources Information Center
Bub, Daniel N.; Masson, Michael E. J.
2012-01-01
Two classes of hand action representations are shown to be activated by listening to the name of a manipulable object (e.g., cellphone). The functional action associated with the proper use of an object is evoked soon after the onset of its name, as indicated by primed execution of that action. Priming is sustained throughout the duration of the…
Young Children Are Sensitive to How an Object Was Created when Deciding What To Name It.
ERIC Educational Resources Information Center
Gelman, Susan A.; Bloom, Paul
2000-01-01
Examined how 3- and 5-year-olds and adults extend names for human-made artifacts. Found that even 3-year-olds were more likely to provide artifact names (e.g., "knife") when they believed objects were intentionally created and to provide material-based descriptions (e.g., "plastic") when they believed objects were accidentally…
The case of the missing visual details: Occlusion and long-term visual memory.
Williams, Carrick C; Burkle, Kyle A
2017-10-01
To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing the visible details in the former and the object's overall form in the latter. On a token discrimination test, surprisingly, memory for solid or stripe occluded objects at either encoding (Experiment 1) or test (Experiment 2) was the same. In contrast, when occluded objects matched at encoding and test (Experiment 3) or when the occlusion shifted, revealing the entire object piecemeal (Experiment 4), memory was better for solid compared with stripe occluded objects, indicating that objects are represented differently in long-term visual memory. Critically, we also found that when the task emphasized remembering exactly what was shown, memory performance in the more detailed solid occlusion condition exceeded that in the stripe condition (Experiment 5). However, when the task emphasized the whole object form, memory was better in the stripe condition (Experiment 6) than in the solid condition. We argue that long-term visual memory can represent objects flexibly, and task demands can interact with visual information, allowing the viewer to cope with changing real-world visual environments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer.
Kalagher, Hilary
2013-01-01
Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.
A Visual Profile of Queensland Indigenous Children.
Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M
2016-03-01
Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying children with CI and reduced visual information processing given the potential effect of these conditions on school performance.
Wheat, Katherine L; Cornelissen, Piers L; Sack, Alexander T; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-05-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within ∼100ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we used online transcranial magnetic stimulation (TMS) to investigate whether LIFGpo/PCG is necessary for (not just correlated with) visual word recognition by ∼100ms. Pulses were delivered to individually fMRI-defined LIFGpo/PCG in Dutch speakers 75-500ms after stimulus onset during reading and picture naming. Reading and picture naming reactions times were significantly slower following pulses at 225-300ms. Contrary to predictions, there was no disruption to reading for pulses before 225ms. This does not provide evidence in favour of a functional role for LIFGpo/PCG in reading before 225ms in this case, but does extend previous findings in picture stimuli to written Dutch words. Copyright © 2012 Elsevier Inc. All rights reserved.
On the Nature of Verb-Noun Dissociations in Bilectal SLI: A Psycholinguistic Perspective from Greek
ERIC Educational Resources Information Center
Kambanaros, Maria; Grohmann, Kleanthes K.; Michaelides, Michalis; Theodorou, Eleni
2014-01-01
We report on object and action picture-naming accuracy in two groups of bilectal speakers in Cyprus, children with typical language development (TLD) and children with specific language impairment (SLI). Object names were overall better retrieved than action names by both groups. Given that comprehension for action names was relatively intact for…
The role of colour in implicit and explicit memory performance.
Vernon, David; Lloyd-Jones, Toby J
2003-07-01
We present two experiments that examine the effects of colour transformation between study and test (from black and white to colour and vice versa, of from incorrectly coloured to correctly coloured and vice versa) on implicit and explicit measures of memory for diagnostically coloured natural objects (e.g., yellow banana). For naming and coloured-object decision (i.e., deciding whether an object is correctly coloured), there were shorter response times to correctly coloured-objects than to black-and-white and incorrectly coloured-objects. Repetition priming was equivalent for the different stimulus types. Colour transformation did not influence priming of picture naming, but for coloured-object decision priming was evident only for objects remaining the same from study to test. This was the case for both naming and coloured-object decision as study tasks. When participants were asked to consciously recognize objects that they had named or made coloured-object decisions to previously, whilst ignoring their colour, colour transformation reduced recognition efficiency. We discuss these results in terms of the flexibility of object representations that mediate priming and recognition.
Neural representation of objects in space: a dual coding account.
Humphreys, G W
1998-01-01
I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227
Grubert, Anna; Eimer, Martin
2015-11-11
During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu
2018-04-01
Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients’ visual experience. In this paper, we employ computer vision approaches to seek to expand the perceptible visual field in patients implanted potentially with a high-density retinal prosthesis while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, by introducing salient object detection based on color and intensity-difference contrast, aiming to remap important information of a scene into a small visual field and preserve their original scale as much as possible. It may improve prosthetic recipients’ perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting object number and recognizing objects, are conducted under simulated prosthetic vision. As control, we use three other image retargeting techniques, including Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method outperforms in preserving more key features and has significantly higher recognition accuracy in comparison with other three image retargeting methods under the condition of small visual field and low-resolution. Significance. The proposed method is beneficial to expand the perceived visual field of prosthesis recipients and improve their object detection and recognition performance. It suggests that our method may provide an effective option for image processing module in future high-density retinal implants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery form large, complex, and multivariate scientific data. The research coveredmore » in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics.Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework MATLAB and the visualization have been integrated, making advanced analysis tools accessible to biologist and enabling bioinformatic researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.« less
Object formation in visual working memory: Evidence from object-based attention.
Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei
2016-09-01
We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, the target presented at the location of that object was perceived as occurring earlier than that presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object (Experiment 2). These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.
Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.
Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira
2012-04-01
Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.
A subjective study and an objective metric to quantify the granularity level of textures
NASA Astrophysics Data System (ADS)
Subedar, Mahesh M.; Karam, Lina J.
2015-03-01
Texture granularity is an important visual characteristic that is useful in a variety of applications, including analysis, recognition, and compression, to name a few. A texture granularity measure can be used to quantify the perceived level of texture granularity. The granularity level of the textures is influenced by the size of the texture primitives. A primitive is defined as the smallest recognizable repetitive object in the texture. If the texture has large primitives then the perceived granularity level tends to be lower as compared to a texture with smaller primitives. In this work we are presenting a texture granularity database referred as GranTEX which consists of 30 textures with varying levels of primitive sizes and granularity levels. The GranTEX database consists of both natural and man-made textures. A subjective study is conducted to measure the perceived granularity level of textures present in the GranTEX database. An objective metric that automatically measures the perceived granularity level of textures is also presented as part of this work. It is shown that the proposed granularity metric correlates well with the subjective granularity scores.
Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław
2014-01-01
“SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons. PMID:24905854
Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition
Craddock, Matt; Lawson, Rebecca
2009-01-01
A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685
Face imagery is based on featural representations.
Lobmaier, Janek S; Mast, Fred W
2008-01-01
The effect of imagery on featural and configural face processing was investigated using blurred and scrambled faces. By means of blurring, featural information is reduced; by scrambling a face into its constituent parts configural information is lost. Twenty-four participants learned ten faces together with the sound of a name. In following matching-to-sample tasks participants had to decide whether an auditory presented name belonged to a visually presented scrambled or blurred face in two experimental conditions. In the imagery condition, the name was presented prior to the visual stimulus and participants were required to imagine the corresponding face as clearly and vividly as possible. In the perception condition name and test face were presented simultaneously, thus no facilitation via mental imagery was possible. Analyses of the hit values showed that in the imagery condition scrambled faces were recognized significantly better than blurred faces whereas there was no such effect for the perception condition. The results suggest that mental imagery activates featural representations more than configural representations.
What and where information in the caudate tail guides saccades to visual objects
Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide
2012-01-01
We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building block of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity so that it may not be represented as bound objects. Additionally, it was argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (the CDA) as an electrophysiological marker of WM capacity, to test those alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R
2014-07-01
Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.
An insect-inspired model for visual binding II: functional analysis and visual attention.
Northcutt, Brandon D; Higgins, Charles M
2017-04-01
We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
Soble, Jason R; Marceaux, Janice C; Galindo, Juliette; Sordahl, Jeffrey A; Highsmith, Jonathan M; O'Rourke, Justin J F; González, David Andrés; Critchfield, Edan A; McCoy, Karin J M
2016-01-01
Confrontation naming tests are a common neuropsychological method of assessing language and a critical diagnostic tool in identifying certain neurodegenerative diseases; however, there is limited literature examining the visual-perceptual demands of these tasks. This study investigated the effect of perceptual reasoning abilities on three confrontation naming tests, the Boston Naming Test (BNT), Neuropsychological Assessment Battery (NAB) Naming Test, and Visual Naming Test (VNT) to elucidate the diverse cognitive functions underlying these tasks to assist with test selection procedures and increase diagnostic accuracy. A mixed clinical sample of 121 veterans were administered the BNT, NAB, VNT, and Wechsler Adult Intelligence Scale-4th Edition (WAIS-IV) Verbal Comprehension Index (VCI) and Perceptual Reasoning Index (PRI) as part of a comprehensive neuropsychological evaluation. Multiple regression indicated that PRI accounted for 23%, 13%, and 15% of the variance in BNT, VNT, and NAB scores, respectively, but dropped out as a significant predictor once VCI was added. Follow-up bootstrap mediation analyses revealed that PRI had a significant indirect effect on naming performance after controlling education, primary language, and severity of cognitive impairment, as well as the mediating effect of general verbal abilities for the BNT (B = 0.13; 95% confidence interval, CI [.07, .20]), VNT (B = 0.01; 95% CI [.002, .03]), and NAB (B = 0.03; 95% CI [.01, .06]). Findings revealed a complex relationship between perceptual reasoning abilities and confrontation naming that is mediated by general verbal abilities. However, when verbal abilities were statistically controlled, perceptual reasoning abilities were found to have a significant indirect effect on performance across all three confrontation naming measures with the largest effect noted with the BNT relative to the VNT and NAB Naming Test.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm: DWPRank outperforms the state-of-the-art algorithm by 12.4% in mean average precision and about 11 times faster in efficiency.
Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.
2015-01-01
Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766
Sereno, Anne B.; Lehky, Sidney R.
2011-01-01
Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provide less accurate low-dimensional reconstructions of stimulus locations. They produce instead only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”). PMID:21344010
Seghier, Mohamed L; Hope, Thomas M H; Prejawa, Susan; Parker Jones, 'Ōiwi; Vitkovitch, Melanie; Price, Cathy J
2015-03-18
The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level. Copyright © 2015 Seghier et al.
Scene and Position Specificity in Visual Memory for Objects
ERIC Educational Resources Information Center
Hollingworth, Andrew
2006-01-01
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object…
Action and object word writing in a case of bilingual aphasia.
Kambanaros, Maria; Messinis, Lambros; Anyfantis, Emmanouil
2012-01-01
We report the spoken and written naming of a bilingual speaker with aphasia in two languages that differ in morphological complexity, orthographic transparency and script Greek and English. AA presented with difficulties in spoken picture naming together with preserved written picture naming for action words in Greek. In English, AA showed similar performance across both tasks for action and object words, i.e. difficulties retrieving action and object names for both spoken and written naming. Our findings support the hypothesis that cognitive processes used for spoken and written naming are independent components of the language system and can be selectively impaired after brain injury. In the case of bilingual speakers, such processes impact on both languages. We conclude grammatical category is an organizing principle in bilingual dysgraphia.
Stepping Stones to Literacy. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2007
2007-01-01
Stepping Stones to Literacy (SSL) is a supplemental curriculum designed to promote listening, print conventions, phonological awareness, phonemic awareness, and serial processing/rapid naming (quickly naming familiar visual symbols and stimuli such as letters or colors). The program targets kindergarten and older preschool students considered to…
Cognitive Changes in Presymptomatic Parkinson’s Disease
2004-09-01
frontation naming (e.g., Boston Naming Test), verbal flu- Tramadol (analgesic) 1 ency (e.g., COWA), or memory for either verbal or visual- Benzodiazapine...underlie mental rotation deficits in PD. Men help elucidate the nature of these relationships. typically perform better than women on tests of mental
Blueprint for the Diagnosis of Difficulties with Cardinality.
ERIC Educational Resources Information Center
Dunlap, William P.; Brennen, Alison H.
1981-01-01
The article describes a diagnostic procedure for assessing children's mental images and knowledge of cardinal numbers, 0 through 9. The diagnostic procedure includes the assessment of a child's visual memory, visual perception, symbol recognition, oral naming of numerals, and symbol-set linkage. (Author/SBH)
Perceived object stability depends on multisensory estimates of gravity.
Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H
2011-04-27
How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
The Effects of Visual Complexity for Japanese Kanji Processing with High and Low Frequencies
ERIC Educational Resources Information Center
Tamaoka, Katsuo; Kiyama, Sachiko
2013-01-01
The present study investigated the effects of visual complexity for kanji processing by selecting target kanji from different stroke ranges of visually simple (2-6 strokes), medium (8-12 strokes), and complex (14-20 strokes) kanji with high and low frequencies. A kanji lexical decision task in Experiment 1 and a kanji naming task in Experiment 2…
ERIC Educational Resources Information Center
Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.
2011-01-01
Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…
The same-location cost is unrelated to attentional settings: an object-updating account.
Carmel, Tomer; Lamy, Dominique
2014-08-01
What mechanisms allow us to ignore salient yet irrelevant visual information has been a matter of intense debate. According to the contingent-capture hypothesis, such information is filtered out, whereas according to the salience-based account, it captures attention automatically. Several recent studies have reported a same-location cost that appears to fit neither of these accounts. These showed that responses may actually be slower when the target appears at the location just occupied by an irrelevant singleton distractor. Here, we investigated the mechanisms underlying this same-location cost. Our findings show that the same-location cost is unrelated to automatic attentional capture or strategic setting of attentional priorities, and therefore invalidate the feature-based inhibition and fast attentional disengagement accounts of this effect. In addition, we show that the cost is wiped out when the cue and target are not perceived as parts of the same object. We interpret these findings as indicating that the same-location cost has been previously misinterpreted by both bottom-up and top-down theories of attentional capture. We propose that it is better understood as a consequence of object updating, namely, as the cost of updating the information stored about an object when this object changes across time.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
NASA Astrophysics Data System (ADS)
Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad
2010-02-01
A self teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. System comprises of a computer, a vision camera, an ear speaker and a microphone. Camera, attached with the computer system is mounted on the ceiling opposite (on the required angle) to the desk on which the book is placed. Sample images and voices in the form of instructions and commands of English, Urdu alphabets, Numeric Digits, Operators and Shapes are already stored in the database. A blind child first reads the embossed character (object) with the help of fingers than he speaks the answer, name of the character, shape etc into the microphone. With the voice command of a blind child received by the microphone, image is taken by the camera which is processed by MATLAB® program developed with the help of Image Acquisition and Image processing toolbox and generates a response or required set of instructions to child via ear speaker, resulting in self education of a visually impaired child. Speech recognition program is also developed in MATLAB® with the help of Data Acquisition and Signal Processing toolbox which records and process the command of the blind child.
Salience of the lambs: a test of the saliency map hypothesis with pictures of emotive objects.
Humphrey, Katherine; Underwood, Geoffrey; Lambert, Tony
2012-01-25
Humans have an ability to rapidly detect emotive stimuli. However, many emotional objects in a scene are also highly visually salient, which raises the question of how dependent the effects of emotionality are on visual saliency and whether the presence of an emotional object changes the power of a more visually salient object in attracting attention. Participants were shown a set of positive, negative, and neutral pictures and completed recall and recognition memory tests. Eye movement data revealed that visual saliency does influence eye movements, but the effect is reliably reduced when an emotional object is present. Pictures containing negative objects were recognized more accurately and recalled in greater detail, and participants fixated more on negative objects than positive or neutral ones. Initial fixations were more likely to be on emotional objects than more visually salient neutral ones, suggesting that the processing of emotional features occurs at a very early stage of perception.
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Decoding visual object categories in early somatosensory cortex.
Smith, Fraser W; Goodale, Melvyn A
2015-04-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
Decoding Visual Object Categories in Early Somatosensory Cortex
Smith, Fraser W.; Goodale, Melvyn A.
2015-01-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136
The development of newborn object recognition in fast and slow visual worlds
Wood, Justin N.; Wood, Samantha M. W.
2016-01-01
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world. PMID:27097925
Infant Visual Attention and Object Recognition
Reynolds, Greg D.
2015-01-01
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333
Grossberg, Stephen
2014-01-01
Neural models of perception clarify how visual illusions arise from adaptive neural processes. Illusions also provide important insights into how adaptive neural processes work. This article focuses on two illusions that illustrate a fundamental property of global brain organization; namely, that advanced brains are organized into parallel cortical processing streams with computationally complementary properties. That is, in order to process certain combinations of properties, each cortical stream cannot process complementary properties. Interactions between these streams, across multiple processing stages, overcome their complementary deficiencies to compute effective representations of the world, and to thereby achieve the property of complementary consistency. The two illusions concern how illusory depth can vary with brightness, and how apparent motion of illusory contours can occur. Illusory depth from brightness arises from the complementary properties of boundary and surface processes, notably boundary completion and surface-filling in, within the parvocellular form processing cortical stream. This illusion depends upon how surface contour signals from the V2 thin stripes to the V2 interstripes ensure complementary consistency of a unified boundary/surface percept. Apparent motion of illusory contours arises from the complementary properties of form and motion processes across the parvocellular and magnocellular cortical processing streams. This illusion depends upon how illusory contours help to complete boundary representations for object recognition, how apparent motion signals can help to form continuous trajectories for target tracking and prediction, and how formotion interactions from V2-to-MT enable completed object representations to be continuously tracked even when they move behind intermittently occluding objects through time. PMID:25389399
’What’ and ’Where’ in Visual Attention: Evidence from the Neglect Syndrome
1992-01-01
representations of the visual world, visual attention, and object representations. 24 Bauer, R. M., & Rubens, A. B. (1985). Agnosia . In K. M. Heilman, & E...visual information. Journal of Experimental Psychology: General, 1-1, 501-517. Farah, M. J. (1990). Visual Agnosia : Disorders of Object Recognition and
Odours reduce the magnitude of object substitution masking for matching visual targets in females.
Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B
2016-08-01
Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females than males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst nonodour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
The Case of the Missing Visual Details: Occlusion and Long-Term Visual Memory
ERIC Educational Resources Information Center
Williams, Carrick C.; Burkle, Kyle A.
2017-01-01
To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing…
Perceptual asymmetries in greyscales: object-based versus space-based influences.
Thomas, Nicole A; Elias, Lorin J
2012-05-01
Neurologically normal individuals exhibit leftward spatial biases, resulting from object- and space-based biases; however their relative contributions to the overall bias remain unknown. Relative position within the display has not often been considered, with similar spatial conditions being collapsed across. Study 1 used the greyscales task to investigate the influence of relative position and object- and space-based contributions. One image in each greyscale pair was shifted towards the left or the right. A leftward object-based bias moderated by a bias to the centre was expected. Results confirmed this as a left object-based bias occurred in the right visual field, where the left side of the greyscale pairs was located in the centre visual field. Further, only lower visual field images exhibited a significant left bias in the left visual field. The left bias was also stronger when images were partially overlapping in the right visual field, demonstrating the importance of examining proximity. The second study examined whether object-based biases were stronger when actual objects, with directional lighting biases, were used. Direction of luminosity was congruent or incongruent with spatial location. A stronger object-based bias emerged overall; however a leftward bias was seen in congruent conditions and a rightward bias was seen in incongruent conditions. In conditions with significant biases, the lower visual field image was chosen most often. Results show that object- and space-based biases both contribute; however stimulus type allows either space- or object-based biases to be stronger. A lower visual field bias also interacts with these biases, leading the left bias to be eliminated under certain conditions. The complex interaction occurring between frame of reference and visual field makes spatial location extremely important in determining the strength of the leftward bias. Copyright © 2010 Elsevier Srl. All rights reserved.
Cross-sensory correspondences and symbolism in spoken and written language.
Walker, Peter
2016-09-01
Lexical sound symbolism in language appears to exploit the feature associations embedded in cross-sensory correspondences. For example, words incorporating relatively high acoustic frequencies (i.e., front/close rather than back/open vowels) are deemed more appropriate as names for concepts associated with brightness, lightness in weight, sharpness, smallness, speed, and thinness, because higher pitched sounds appear to have these cross-sensory features. Correspondences also support prosodic sound symbolism. For example, speakers might raise the fundamental frequency of their voice to emphasize the smallness of the concept they are naming. The conceptual nature of correspondences and their functional bidirectionality indicate they should also support other types of symbolism, including a visual equivalent of prosodic sound symbolism. For example, the correspondence between auditory pitch and visual thinness predicts that a typeface with relatively thin letter strokes will reinforce a word's reference to a relatively high pitch sound (e.g., squeal). An initial rating study confirms that the thinness-thickness of a typeface's letter strokes accesses the same cross-sensory correspondences observed elsewhere. A series of speeded word classification experiments then confirms that the thinness-thickness of letter strokes can facilitate a reader's comprehension of the pitch of a sound named by a word (thinner letter strokes being appropriate for higher pitch sounds), as can the brightness of the text (e.g., white-on-gray text being appropriate for the names of higher pitch sounds). It is proposed that the elementary visual features of text are represented in the same conceptual system as word meaning, allowing cross-sensory correspondences to support visual symbolism in language. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
2017-01-01
Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. SIGNIFICANCE STATEMENT Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway that processes “where” it is located. This view has been challenged by recent studies revealing the existence of “what” and “where” information in both pathways. Here, we found that goal-directed visual information processing differentially modulates shape-based object category representations in the two pathways. Whereas ventral representations are more invariant to the demand of the task, reflecting what an object is, dorsal representations are more adaptive, reflecting what we do with the object. Thus, despite the existence of “what” and “where” information in both pathways, visual representations may still differ fundamentally in the two pathways. PMID:28821655
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.
Cognitive components of picture naming.
Johnson, C J; Paivio, A; Clark, J M
1996-07-01
A substantial research literature documents the effects of diverse item attributes, task conditions, and participant characteristics on the case of picture naming. The authors review what the research has revealed about 3 generally accepted stages of naming a pictured object: object identification, name activation, and response generation. They also show that dual coding theory gives a coherent and plausible account of these findings without positing amodal conceptual representations, and they identify issues and methods that may further advance the understanding of picture naming and related cognitive tasks.
Topolinski, Sascha; Boecker, Lea
2016-04-01
We explored the impact of consonantal articulation direction of names for foods on expected palatability for these foods (total N = 256). Dishes (Experiments 1-2) and food items (Experiment 3) were labeled with names whose consonants either wandered from the front to the back of the mouth (inward, e.g., PASOKI) or from the back to the front of the mouth (outward; e.g., KASOPI). Because inward (outward) wandering consonant sequences trigger eating-like (expectoration-like) mouth movements, dishes and foods were rated higher in palatability when they bore an inward compared to an outward wandering name. This effect occurred already under silent reading and for hungry and satiated participants alike. As a boundary condition, this articulation effect did occur when also additional visual information on the product was given (Experiment 3), but vanished when this visual information was too vivid and rich in competing palatability cues (Experiment 2). Future marketing can exploit this effect by increasing the appeal of food products by using inward wandering brand names, that is, names that start with the lips and end in the throat. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Development of Individuation in Autism
ERIC Educational Resources Information Center
O'Hearn, Kirsten; Franconeri, Steven; Wright, Catherine; Minshew, Nancy; Luna, Beatriz
2013-01-01
Evidence suggests that people with autism rely less on holistic visual information than typical adults. The current studies examine this by investigating core visual processes that contribute to holistic processing--namely, individuation and element grouping--and how they develop in participants with autism and typically developing (TD)…
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Whipple, Brittany D; Nelson, Jason M
2016-02-01
This study investigated the performance of adolescents and young adults with Attention Deficit Hyperactivity Disorder (ADHD), Reading Disorder (RD), and ADHD/RD on measures of alphanumeric and nonalphanumeric naming speed and the relationship between naming speed and academic achievement. The sample (N = 203) included students aged 17-28 years diagnosed with ADHD (n = 83), RD (n = 71), or ADHD/RD (n = 49). Individuals with ADHD performed significantly faster on measures of alphanumeric naming compared with RD and comorbid groups and, within group, demonstrated significantly quicker naming of letters/digits compared with colors/objects. Both alphanumeric rapid naming scores and processing speed scores variably predicted academic achievement scores across groups, whereas nonalphanumeric rapid naming only predicted reading comprehension scores within the ADHD group. Results support findings that older individuals with ADHD show relative weakness in rapid naming of objects and colors. Implications of these findings in regard to assessment of older individuals for ADHD are discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Pérez, Miguel A
2007-01-01
The aim of this study was to address the effect of objective age of acquisition (AoA) on picture-naming latencies when different measures of frequency (cumulative and adult word frequency) and frequency trajectory are taken into account. A total of 80 Spanish participants named a set of 178 pictures. Several multiple regression analyses assessed the influence of AoA, word frequency, frequency trajectory, object familiarity, name agreement, image agreement, image variability, name length, and orthographic neighbourhood density on naming times. The results revealed that AoA is the main predictor of picture-naming times. Cumulative frequency and adult word frequency (written or spoken) appeared as important factors in picture naming, but frequency trajectory and object familiarity did not. Other significant variables were image agreement, image variability, and neighbourhood density. These results (a) provide additional evidence of the predictive power of AoA in naming times independent of word-frequency and (b) suggest that image variability and neighbourhood density should also be taken into account in models of lexical production.
An insect-inspired model for visual binding I: learning objects and their characteristics.
Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M
2017-04-01
Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.
Visual Sensitivities and Discriminations and Their Roles in Aviation.
1986-03-01
D. Low contrast letter charts in early diabetic retinopathy , octrlar hypertension, glaucoma and Parkinson’s disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were...3) Compare visual detection (i.e. visual acquisition) of camouflaged objects whose edges are defined by velocity differences with visual detection
Objective Measures of Visual Function in Papilledema
Moss, Heather E.
2016-01-01
Synopsis Visual function is an important parameter to consider when managing patients with papilledema. Though the current standard of care uses standard automated perimetry (SAP) to obtain this information, this test is inherently subjective and prone to patient errors. Objective visual function tests including the visual evoked potential, pattern electroretinogram, photopic negative response of the full field electroretinogram, and pupillary light response have the potential to replace or supplement subjective visual function tests in papilledema management. This article reviews the evidence for use of objective visual function tests to assess visual function in papilledema and discusses future investigations needed to develop them as clinically practical and useful measures for this purpose. PMID:28451649
Hastings, Gareth D.; Marsack, Jason D.; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A.
2017-01-01
Purpose To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Methods Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. Results For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ±SD was −0.06 ±0.04 with both refractions; dilated was −0.05 ±0.04 with the objective, and −0.05 ±0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. Conclusions A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. PMID:28370389
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-22
... event-related objects that display the brand name, logo, or selling message of smokeless tobacco... vehicles and other event-related objects bearing smokeless tobacco brand names, logos, or selling messages... using brand names, logos, or selling messages. Given these legislative and likely regulatory changes...
A revised and updated catalog of quasi-stellar objects
NASA Technical Reports Server (NTRS)
Hewitt, A.; Burbidge, G.
1993-01-01
The paper contains a catalog of all known quasi-stellar objects (QSOs) with measured emission redshifts, and BL Lac objects, complete to 1992 December 31. The catalog contains 7315 objects, nearly all QSOs including about 90 BL Lac objects. The catalog and references contain extensive information on names, positions, magnitudes, colors, emission-line redshifts, absorption, variability, polarization, and X-ray, radio, and infrared data. A key in the form of subsidiary tables enables the reader to relate the name of a given object to its coordinate name, which is used throughout the compilation. Plots of the Hubble diagram, the apparent magnitude distribution, the emission redshift distribution, and the distribution of the QSOs on the sky are also given.
An ERP study of recognition memory for concrete and abstract pictures in school-aged children.
Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J; Jacobson, Sandra W; Jacobson, Joseph L
2016-08-01
Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N=96; mean age=11.3years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as "new" or "old" (i.e., previously displayed) compared to abstract pictures. ERPs were characterized by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. Copyright © 2016 Elsevier B.V. All rights reserved.
A visual model for object detection based on active contours and level-set method.
Satoh, Shunji
2006-09-01
A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of details of background provided by visible light image and hiding target information provided by infrared image, which is more suitable for browsing and further processing. Two crucial components for infrared and visual light image fusion are improving its fusion performance as well as reducing its computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of pixel and is well applied in visible light and infrared image fusion with better fusion quality and lower time-consumption. Besides, a fast realization of non-subsampled contourlet transform is also proposed in this paper to improve the computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular ones in six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm gets a more effective result with much less time consuming and performs well in both subjective evaluation and objective indicators.
Diesfeldt, H F A
2011-06-01
A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could detect real letters from made-up ones or from their mirror image. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters in order to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an actual direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers the approach allows an image filtering with preservation of objects borders. The algorithm proposed in this paper is based on sequential data processing. At the first stage, local areas are determined, for this purpose the method of threshold processing, as well as the classical ICI algorithm, is applied. The second stage uses a method based on based on two criteria, namely, L2 norm and the first order square difference. To preserve the boundaries of objects, we will process the transition boundary and local neighborhood the filtering algorithm with a fixed-coefficient. For example, reconstructed images of CT, x-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm. This shows the applicability of analysis many medical imaging applications.
When apperceptive agnosia is explained by a deficit of primary visual processing.
Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta
2014-03-01
Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most of previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case-study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather that by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Neuroanatomical affiliation visualization-interface system.
Palombi, Olivier; Shin, Jae-Won; Watson, Charles; Paxinos, George
2006-01-01
A number of knowledge management systems have been developed to allow users to have access to large quantity of neuroanatomical data. The advent of three-dimensional (3D) visualization techniques allows users to interact with complex 3D object. In order to better understand the structural and functional organization of the brain, we present Neuroanatomical Affiliations Visualization-Interface System (NAVIS) as the original software to see brain structures and neuroanatomical affiliations in 3D. This version of NAVIS has made use of the fifth edition of "The Rat Brain in Stereotaxic coordinates" (Paxinos and Watson, 2005). The NAVIS development environment was based on the scripting language name Python, using visualization toolkit (VTK) as 3D-library and wxPython for the graphic user interface. The following manuscript is focused on the nucleus of the solitary tract (Sol) and the set of affiliated structures in the brain to illustrate the functionality of NAVIS. The nucleus of the Sol is the primary relay center of visceral and taste information, and consists of 14 distinct subnuclei that differ in cytoarchitecture, chemoarchitecture, connections, and function. In the present study, neuroanatomical projection data of the rat Sol were collected from selected literature in PubMed since 1975. Forty-nine identified projection data of Sol were inserted in NAVIS. The standard XML format used as an input for affiliation data allows NAVIS to update data online and/or allows users to manually change or update affiliation data. NAVIS can be extended to nuclei other than Sol.
NASA Astrophysics Data System (ADS)
Tohar, Ibrahim; Hardiman, Gagoek; Ratih Sari, Suzanna
2017-12-01
Keraton Yogyakarta as a summit of Javanese culture has been renowned as a heritage building. As object of study, Keraton Yogyakarta is ornamented with a collection of architectural artifacts. The acculturation and merging of these different styles create a unique impression within the palace complex. This study aims to identify the pattern of acculturation of these two styles and to interpret their meaning and expression. A descriptive-qualitative method is employed in this research, which contains visual observation, documentation collection, interviews with informants, and relevant literature review. As results of study, the expression of Tratag Pagelaran, Tratag Sitihinggil, Bangsal Ponconiti, and Gedong Jene tends to widen, while the expression of Gedong Purwaretna tends to uprise. Every building has its own point of interest and ornamentation which its place and content are different.. In visual observations, there are two categories of buildings in Keraton Yogyakarta,which accommodate two styles, namely Javanese Traditional style and Dutch Colonial style. Buildings of Javanese traditional style, which hold a special concept of shading, were built without buttresses and embody a ‘light’ expression, while buildings of Dutch Colonial style, which hold a concept of protection, were built with massive enclosure and produce a “heavy” expression. Although visually split into two distinct styles, the acculturation process in Keraton Yogyakarta produced a unity in its overall expression. The expression pattern of Keraton Yogyakarta can be used as conservation guidance of Javanese-cultured city.
Seymour, K J; Williams, M A; Rich, A N
2016-05-01
Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
SUMMARY We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
Spatiotemporal dynamics underlying object completion in human ventral visual cortex.
Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2014-08-06
Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.
[Hyperlexia in an adult patient with lesions in the left medial frontal lobe].
Suzuki, K; Yamadori, A; Kumabe, T; Endo, K; Fujii, T; Yoshimoto, T
2000-04-01
A 69-year-old right-handed woman developed a transcortical motor aphasia with hyperlexia following resection of a glioma in the left medial frontal lobe. Neurological examination revealed grasp reflex in the right hand and underutilization of the right upper extremity. An MRI demonstrated lesions in the left medial frontal lobe including the supplementary motor area and the anterior part of the cingulate gyrus, which extended to the anterior part of the body of corpus callosum. Neuropsychologically she was alert and cooperative. She demonstrated transcortical motor aphasia. Her verbal output began with echolalia. Furthermore hyperlexia was observed in daily activities and during examinations. During conversation she suddenly read words written on objects around her which were totally irrelevant to the talk. When she was walking in the ward with an examiner she read words written on a trash bag that passed by and signboards which indicated a name of a room. Her conversation while walking was intermingled with reading words, which was irrelevant to the conversation. She also read time on analog clocks, which were hung on a wall in a watch store. In a naming task, she read words written on objects first and named them upon repeated question about their names. When an examiner opened a newspaper in front of her without any instructions she began reading until the examiner prohibited it. Then she began reading again when an examiner turned the page, although she remembered that she should not read it aloud. She showed mild ideomotor apraxia of a left hand. Utilization behavior, imitation behavior, hypergraphia, or compulsive use of objects was not observed throughout the course. Hyperlexic tendency is a prominent feature of this patient's language output. Hyperlexia was often reported in children with pervasive developmental disorders including autism. There are only a few reports about hyperlexia in adults and some of them were related to diffuse brain dysfunction. Hyperlexia of our patient was associated with echolalia but not with the other "echo" phenomena, which may be because the lesion was unilateral on the left side. Dysfunction of the left supplementary motor area could lead to disinhibition of regulatory mechanism of verbal output in response to auditory and visual stimuli.
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Short-Term Memory Coding in Children with Intellectual Disabilities
ERIC Educational Resources Information Center
Henry, Lucy
2008-01-01
To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and…
Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?
ERIC Educational Resources Information Center
Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno
2012-01-01
The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…
Medications Used by Students with Visual and Hearing Impairments: Implications for Teachers.
ERIC Educational Resources Information Center
Kelley, Pat; And Others
This document presents summary information in chart form on medications used by students with visual and hearing impairments. First, a checklist identifies educational considerations for students who are medicated. Next, common antipsychotic, anticonvulsant, antiasthmatic and other drugs are listed in chart form with drug name, indications, peak…
The Conceptual Grouping Effect: Categories Matter (and Named Categories Matter More)
ERIC Educational Resources Information Center
Lupyan, Gary
2008-01-01
Do conceptual categories affect basic visual processing? A conceptual grouping effect for familiar stimuli is reported using a visual search paradigm. Search through conceptually-homogeneous non-targets was faster and more efficient than search through conceptually-heterogeneous non-targets. This effect cannot be attributed to perceptual factors…
Sigurdardottir, Heida M.; Sheinberg, David L.
2015-01-01
The lateral intraparietal area (LIP) of the dorsal visual stream is thought to play an important role in visually directed orienting, or the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand how and to what extent short-term and long-term experience with visual orienting can determine the nature of responses of LIP neurons to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred peripheral spatial location of a neuron. For some objects the training lasted for less than a single day, while for other objects the training lasted for several months. We found that neural responses to visual objects are affected both by such short-term and long-term experience, but that the length of the learning period determines exactly how this neural plasticity manifests itself. Short-term learning over the course of a single training session affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the neural responses to newly learned objects start to resemble those of familiar over-learned objects that share their meaning or arbitrary association. Long-term learning, on the other hand, affects the earliest and apparently bottom-up responses to visual objects. These responses tend to be greater for objects that have repeatedly been associated with looking toward, rather than away from, LIP neurons’ preferred spatial locations. Responses to objects can nonetheless be distinct even though the objects have both been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore also indicate that a complete experience-driven override of LIP object responses is difficult or impossible. PMID:25633647
[Hemispheric differences in letter matching of hiragana and katakana].
Iizuka, K; Sato, H
1992-07-01
The purpose of the present study was to examine the hemispheric differences in letter matching of hiragana and katakana. The stimuli with a pair of each one letter of hiragana and katakana were presented unilaterally to the right or left visual hemifield with a tachistoscope. The subjects were 40 male right handers. They were required to judge whether a pair of letters had the same name or different one. A significant right visual hemifield superiority was observed for both the accuracy of recognition and reaction time. The results suggest that the callosal relay model of Zaidel may be applied to the name matching task.
The modulating effect of education on semantic interference during healthy aging.
Paolieri, Daniela; Marful, Alejandra; Morales, Luis; Bajo, María Teresa
2018-01-01
Aging has traditionally been related to impairments in name retrieval. These impairments have usually been explained by a phonological transmission deficit hypothesis or by an inhibitory deficit hypothesis. This decline can, however, be modulated by the educational level of the sample. This study analyzed the possible role of these approaches in explaining both object and face naming impairments during aging. Older adults with low and high educational level and young adults with high educational level were asked to repeatedly name objects or famous people using the semantic-blocking paradigm. We compared naming when exemplars were presented in a semantically homogeneous or in a semantically heterogeneous context. Results revealed significantly slower rates of both face and object naming in the homogeneous context (i.e., semantic interference), with a stronger effect for face naming. Interestingly, the group of older adults with a lower educational level showed an increased semantic interference effect during face naming. These findings suggest the joint work of the two mechanisms proposed to explain age-related naming difficulties, i.e., the inhibitory deficit and the transmission deficit hypothesis. Therefore, the stronger vulnerability to semantic interference in the lower educated older adult sample would possibly point to a failure in the inhibitory mechanisms in charge of interference resolution, as proposed by the inhibitory deficit hypothesis. In addition, the fact that this interference effect was mainly restricted to face naming and not to object naming would be consistent with the increased age-related difficulties during proper name retrieval, as suggested by the transmission deficit hypothesis.
The modulating effect of education on semantic interference during healthy aging
Morales, Luis; Bajo, María Teresa
2018-01-01
Aging has traditionally been related to impairments in name retrieval. These impairments have usually been explained by a phonological transmission deficit hypothesis or by an inhibitory deficit hypothesis. This decline can, however, be modulated by the educational level of the sample. This study analyzed the possible role of these approaches in explaining both object and face naming impairments during aging. Older adults with low and high educational level and young adults with high educational level were asked to repeatedly name objects or famous people using the semantic-blocking paradigm. We compared naming when exemplars were presented in a semantically homogeneous or in a semantically heterogeneous context. Results revealed significantly slower rates of both face and object naming in the homogeneous context (i.e., semantic interference), with a stronger effect for face naming. Interestingly, the group of older adults with a lower educational level showed an increased semantic interference effect during face naming. These findings suggest the joint work of the two mechanisms proposed to explain age-related naming difficulties, i.e., the inhibitory deficit and the transmission deficit hypothesis. Therefore, the stronger vulnerability to semantic interference in the lower educated older adult sample would possibly point to a failure in the inhibitory mechanisms in charge of interference resolution, as proposed by the inhibitory deficit hypothesis. In addition, the fact that this interference effect was mainly restricted to face naming and not to object naming would be consistent with the increased age-related difficulties during proper name retrieval, as suggested by the transmission deficit hypothesis. PMID:29370252
Visual Grouping in Accordance With Utterance Planning Facilitates Speech Production.
Zhao, Liming; Paterson, Kevin B; Bai, Xuejun
2018-01-01
Research on language production has focused on the process of utterance planning and involved studying the synchronization between visual gaze and the production of sentences that refer to objects in the immediate visual environment. However, it remains unclear how the visual grouping of these objects might influence this process. To shed light on this issue, the present research examined the effects of the visual grouping of objects in a visual display on utterance planning in two experiments. Participants produced utterances of the form "The snail and the necklace are above/below/on the left/right side of the toothbrush" for objects containing these referents (e.g., a snail, a necklace and a toothbrush). These objects were grouped using classic Gestalt principles of color similarity (Experiment 1) and common region (Experiment 2) so that the induced perceptual grouping was congruent or incongruent with the required phrasal organization. The results showed that speech onset latencies were shorter in congruent than incongruent conditions. The findings therefore reveal that the congruency between the visual grouping of referents and the required phrasal organization can influence speech production. Such findings suggest that, when language is produced in a visual context, speakers make use of both visual and linguistic cues to plan utterances.
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
ERIC Educational Resources Information Center
Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.
2008-01-01
The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
Crajé, Céline; Santello, Marco; Gordon, Andrew M
2013-01-01
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.