Sample records for low-level visual processing

  1. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random-dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system, and that early-level visual motion processing also plays a role. PMID:25873869

  3. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  4. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.
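
    As a rough illustration of the mechanism this abstract describes, low-level visual limits propagating to reaction times, the sketch below simulates a serial scan of a card array in which the probability of resolving a card from peripheral vision determines how many extra fixations are needed. It is a toy model, not the authors' ACT-R implementation; all function names and numeric parameters are invented for illustration.

    ```python
    # Toy sketch (not the authors' ACT-R models): a purely low-level visual
    # parameter -- the chance of identifying a card from peripheral vision --
    # propagates to overall reaction time in a serial-scanning model of a
    # SET-like task. All numbers are illustrative assumptions.
    import random

    def simulate_trial(n_cards=12, p_peripheral_id=0.6,
                       t_fixation=0.28, t_decision=0.5):
        """Return a simulated reaction time (s) for scanning one card array."""
        rt = 0.0
        for _ in range(n_cards):
            # If the card cannot be resolved peripherally, spend an extra
            # fixation foveating it.
            if random.random() > p_peripheral_id:
                rt += t_fixation
        return rt + t_decision

    def mean_rt(p_peripheral_id, n_trials=2000):
        return sum(simulate_trial(p_peripheral_id=p_peripheral_id)
                   for _ in range(n_trials)) / n_trials

    if __name__ == "__main__":
        # A presentation style that is harder to resolve peripherally (lower p)
        # yields longer mean RTs even though the scanning strategy is identical.
        for p in (0.8, 0.5, 0.2):
            print(f"peripheral id prob {p:.1f}: mean RT {mean_rt(p):.2f} s")
    ```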

  5. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  6. The levels of perceptual processing and the neural correlates of increasing subjective visibility.

    PubMed

    Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel

    2017-10-01

    According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  8. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  9. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.
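
    The low-level measure mentioned here, a contrast threshold, is typically estimated with an adaptive staircase. The sketch below runs a generic 2-down/1-up staircase against a simulated observer; it is not the authors' procedure, and the threshold, slope, and step values are invented assumptions.

    ```python
    # Generic 2-down/1-up staircase sketch for estimating a contrast threshold.
    # The "observer" is a simulated psychometric function; all parameters are
    # illustrative, not values from the study above.
    import math
    import random

    def observer_correct(contrast, true_threshold=0.05, slope=8.0):
        """Simulated 2AFC observer: accuracy rises from 50% with log contrast."""
        p_detect = 1.0 / (1.0 + math.exp(-slope * (math.log10(contrast)
                                                   - math.log10(true_threshold))))
        return random.random() < 0.5 + 0.5 * p_detect

    def two_down_one_up(start=0.2, step=0.02, n_reversals=12):
        contrast, n_correct, direction, reversals = start, 0, 0, []
        while len(reversals) < n_reversals:
            if observer_correct(contrast):
                n_correct += 1
                if n_correct < 2:
                    continue               # need 2 correct before stepping down
                n_correct, new_dir = 0, -1
            else:
                n_correct, new_dir = 0, +1
            if direction and new_dir != direction:
                reversals.append(contrast)  # record contrast at each reversal
            direction = new_dir
            contrast = max(contrast + new_dir * step, 0.005)
        return sum(reversals[-6:]) / 6      # mean of the last 6 reversals

    if __name__ == "__main__":
        print(f"estimated contrast threshold: {two_down_one_up():.3f}")
    ```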

  10. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  11. Sandwich masking eliminates both visual awareness of faces and face-specific brain activity through a feedforward mechanism.

    PubMed

    Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G

    2011-06-07

    It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher-level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. However, the masking also appeared to strongly attenuate earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher-level visual system pathways specific to object-category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.

  12. Dissociation between perceptual processing and priming in long-term lorazepam users.

    PubMed

    Giersch, Anne; Vidailhet, Pierre

    2006-12-01

    Acute effects of lorazepam on visual information processing, perceptual priming and explicit memory are well established. However, visual processing and perceptual priming have rarely been explored in long-term lorazepam users. By exploring these functions it was possible to test the hypothesis that difficulty in processing visual information may lead to deficiencies in perceptual priming. Using a single-blind procedure, we tested explicit memory, perceptual priming and visual perception in 15 long-term lorazepam users and 15 control subjects individually matched according to sex, age and education level. Explicit memory, perceptual priming, and the identification of fragmented pictures were found to be preserved in long-term lorazepam users, contrary to what is usually observed after an acute drug intake. The processing of visual contour, on the other hand, was still significantly impaired. These results suggest that the effects observed on low-level visual perception are independent of the acute deleterious effects of lorazepam on perceptual priming. A comparison of perceptual priming in subjects with low- vs. high-level identification of new fragmented pictures further suggests that the ability to identify fragmented pictures has no influence on priming. Despite the fact that they were treated with relatively low doses and tested far from peak plasma concentration, it is noteworthy that memory was preserved in long-term users.

  13. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    PubMed

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  14. Small numbers are sensed directly, high numbers constructed from size and density.

    PubMed

    Zimmermann, Eckart

    2018-04-01

    Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that the apparent numerosity is derived from an analysis of lower-level features such as the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex that are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparatively small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the size of the area over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
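
    For readers unfamiliar with the time-resolved decoding approach this review covers, the sketch below trains and tests a classifier at each time point of an epochs-by-channels-by-times array. It uses synthetic data as a stand-in for real MEG epochs; the array shapes, injected effect, and classifier choice are illustrative assumptions, not any particular study's pipeline.

    ```python
    # Minimal time-resolved decoding sketch: cross-validated classification of
    # object category at every time sample of synthetic "MEG" epochs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_times = 200, 64, 120
    X = rng.standard_normal((n_epochs, n_channels, n_times))
    y = rng.integers(0, 2, n_epochs)          # two object categories

    # Inject a weak category signal between samples 40 and 80 so the decoder
    # has something to find (purely for demonstration).
    X[y == 1, :10, 40:80] += 0.4

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    accuracy = np.array([
        cross_val_score(clf, X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])
    print("peak decoding accuracy:", accuracy.max().round(2),
          "at sample", int(accuracy.argmax()))
    ```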

  16. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

    Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When the propensity to inattention is high, ERP recordings show diminished amplification, concomitant with a decrease in theta-band power, during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect or reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.

  17. Modeling of pilot's visual behavior for low-level flight

    NASA Astrophysics Data System (ADS)

    Schulte, Axel; Onken, Reiner

    1995-06-01

    Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded on the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight-simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment. The pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, founded on the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built on fuzzy evaluation of terrain features to determine the landmarks used by pilots. A computer implementation of the model selects the same features that trained pilots preferred.
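
    The core idea, fuzzy evaluation of terrain features to rank candidate landmarks, can be illustrated with a small sketch. This is a hedged, generic example, not the authors' knowledge-based system: the feature names, membership functions, and weights are invented for illustration.

    ```python
    # Generic fuzzy-scoring sketch: rank candidate landmarks by aggregating
    # fuzzy memberships of a few terrain-feature attributes.
    def tri(x, a, b, c):
        """Triangular fuzzy membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def landmark_score(feature):
        # Degree to which the feature is "large enough", "close to the track",
        # and "visually distinct"; aggregate with a weighted mean.
        size_ok = tri(feature["size_m"], 20, 150, 600)
        near_track = tri(feature["dist_to_track_km"], -1, 0, 3)
        distinct = feature["contrast"]       # assumed already scaled to [0, 1]
        return 0.4 * size_ok + 0.35 * near_track + 0.25 * distinct

    candidates = [
        {"name": "church tower", "size_m": 40, "dist_to_track_km": 0.5, "contrast": 0.9},
        {"name": "forest edge",  "size_m": 500, "dist_to_track_km": 2.5, "contrast": 0.4},
        {"name": "small shed",   "size_m": 8,   "dist_to_track_km": 0.2, "contrast": 0.3},
    ]
    for c in sorted(candidates, key=landmark_score, reverse=True):
        print(f"{c['name']:12s} score={landmark_score(c):.2f}")
    ```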

  18. Autism spectrum traits and visual processing in young adults with very low birth weight: the Helsinki Study of Very Low Birth Weight adults.

    PubMed

    Wolford, E; Pesonen, A-K; Heinonen, K; Lahti, M; Pyhälä, R; Lahti, J; Hovi, P; Strang-Karlsson, S; Eriksson, J G; Andersson, S; Järvenpää, A-L; Kajantie, E; Räikkönen, K

    2017-04-01

    Visual processing problems may be one underlying factor for cognitive impairments related to autism spectrum disorders (ASDs). We examined associations between ASD traits (Autism-Spectrum Quotient) and visual processing performance (Rey-Osterrieth Complex Figure Test; Block Design task of the Wechsler Adult Intelligence Scale-III) in young adults (mean age = 25.0, s.d. = 2.1 years) born preterm at very low birth weight (VLBW; <1500 g) (n = 101) or at term (n = 104). A higher level of ASD traits was associated with slower global visual processing speed in the preterm VLBW group, but not in the term-born group (P < 0.04 for interaction). Our findings suggest that the associations between ASD traits and visual processing may be restricted to individuals born preterm, and related specifically to global, not local, visual processing. Our findings point to cumulative social and neurocognitive problems in those born preterm at VLBW.

  19. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  20. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

    Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
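
    To make the feature-to-rating pipeline concrete, the sketch below computes two simple low-level features per image (edge density and mean saturation) and regresses preference ratings on them. It is an illustrative stand-in, not Kardan et al.'s feature set or mediation model; the images, ratings, and coefficients are synthetic.

    ```python
    # Illustrative sketch: low-level image features as predictors of ratings.
    import numpy as np

    rng = np.random.default_rng(1)

    def edge_density(gray):
        # Mean absolute horizontal + vertical luminance difference
        gx = np.abs(np.diff(gray, axis=1)).mean()
        gy = np.abs(np.diff(gray, axis=0)).mean()
        return gx + gy

    def mean_saturation(rgb):
        mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
        return np.where(mx > 0, (mx - mn) / (mx + 1e-9), 0).mean()

    # Fake dataset: 100 random RGB "images" and ratings loosely tied to features
    images = rng.random((100, 64, 64, 3))
    feats = np.array([[edge_density(im.mean(-1)), mean_saturation(im)]
                      for im in images])
    ratings = 2.0 * feats[:, 0] - 1.0 * feats[:, 1] + 0.1 * rng.standard_normal(100)

    # Ordinary least squares: ratings ~ intercept + edge density + saturation
    X = np.column_stack([np.ones(len(feats)), feats])
    beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    pred = X @ beta
    r2 = 1 - ((ratings - pred) ** 2).sum() / ((ratings - ratings.mean()) ** 2).sum()
    print("coefficients:", beta.round(3), "R^2:", round(r2, 3))
    ```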

  1. Abnormal brain activation in neurofibromatosis type 1: a link between visual processing and the default mode network.

    PubMed

    Violante, Inês R; Ribeiro, Maria J; Cunha, Gil; Bernardino, Inês; Duarte, João V; Ramos, Fabiana; Saraiva, Jorge; Silva, Eduardo; Castelo-Branco, Miguel

    2012-01-01

    Neurofibromatosis type 1 (NF1) is one of the most common single gene disorders affecting the human nervous system with a high incidence of cognitive deficits, particularly visuospatial. Nevertheless, neurophysiological alterations in low-level visual processing that could be relevant to explain the cognitive phenotype are poorly understood. Here we used functional magnetic resonance imaging (fMRI) to study early cortical visual pathways in children and adults with NF1. We employed two distinct stimulus types differing in contrast and spatial and temporal frequencies to evoke relatively different activation of the magnocellular (M) and parvocellular (P) pathways. Hemodynamic responses were investigated in retinotopically-defined regions V1, V2 and V3 and then over the acquired cortical volume. Relative to matched control subjects, patients with NF1 showed deficient activation of the low-level visual cortex to both stimulus types. Importantly, this finding was observed for children and adults with NF1, indicating that low-level visual processing deficits do not ameliorate with age. Moreover, only during M-biased stimulation patients with NF1 failed to deactivate or even activated anterior and posterior midline regions of the default mode network. The observation that the magnocellular visual pathway is impaired in NF1 in early visual processing and is specifically associated with a deficient deactivation of the default mode network may provide a neural explanation for high-order cognitive deficits present in NF1, particularly visuospatial and attentional. A link between magnocellular and default mode network processing may generalize to neuropsychiatric disorders where such deficits have been separately identified.

  2. The Efficacy of a Low-Level Program Visualization Tool for Teaching Programming Concepts to Novice C Programmers.

    ERIC Educational Resources Information Center

    Smith, Philip A.; Webb, Geoffrey I.

    2000-01-01

    Describes "Glass-box Interpreter" a low-level program visualization tool called Bradman designed to provide a conceptual model of C program execution for novice programmers and makes visible aspects of the programming process normally hidden from the user. Presents an experiment that tests the efficacy of Bradman, and provides…

  3. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
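
    The RSA step referred to here can be summarized in a few lines: build a neural representational dissimilarity matrix (RDM) from sensor patterns and compare it with a model RDM (for example, from a network layer) using rank correlation. The sketch below uses synthetic placeholders for the neural and model data; it is a minimal generic RSA example, not this study's analysis code.

    ```python
    # Minimal RSA sketch: compare a neural RDM with a model RDM.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n_conditions, n_sensors = 24, 64

    neural_patterns = rng.standard_normal((n_conditions, n_sensors))
    model_features = rng.standard_normal((n_conditions, 50))

    # Condensed RDMs: pairwise correlation distance between condition patterns
    neural_rdm = pdist(neural_patterns, metric="correlation")
    model_rdm = pdist(model_features, metric="correlation")

    rho, p = spearmanr(neural_rdm, model_rdm)
    print(f"neural-model RDM similarity: rho = {rho:.2f}, p = {p:.3f}")
    ```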

  4. Eagle-eyed visual acuity: an experimental investigation of enhanced perception in autism.

    PubMed

    Ashwin, Emma; Ashwin, Chris; Rhydderch, Danielle; Howells, Jessica; Baron-Cohen, Simon

    2009-01-01

    Anecdotal accounts of sensory hypersensitivity in individuals with autism spectrum conditions (ASC) have been noted since the first reports of the condition. Over time, empirical evidence has supported the notion that those with ASC have superior visual abilities compared with control subjects. However, it remains unclear whether these abilities are specifically the result of differences in sensory thresholds (low-level processing), rather than higher-level cognitive processes. This study investigates visual threshold in n = 15 individuals with ASC and n = 15 individuals without ASC, using a standardized optometric test, the Freiburg Visual Acuity and Contrast Test, to investigate basic low-level visual acuity. Individuals with ASC have significantly better visual acuity (20:7) compared with control subjects (20:13), acuity so superior that it lies in the region reported for birds of prey. The results of this study suggest that inclusion of sensory hypersensitivity in the diagnostic criteria for ASC may be warranted and that basic standardized tests of sensory thresholds may inform causal theories of ASC.

  5. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, approaches based on modeling biological vision mechanisms are being extensively developed. However, up to now, real-world image processing has had no effective solution within either biologically inspired or conventional frameworks. Evidently, new algorithms and system architectures with advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new low-level image-processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into a foveal active vision model, MARR. It is supposed that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
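
    One of the ideas named above, a local space-variant filter whose resolution falls off with distance from fixation, is easy to demonstrate with a bank of Gaussian blurs blended by eccentricity. The sketch below is a generic foveation demo under that assumption, not the MARR model's actual algorithm; scipy's gaussian_filter stands in for the space-variant filter.

    ```python
    # Generic space-variant (foveated) filtering sketch: blur increases with
    # eccentricity from a fixation point.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def foveate(image, fix_row, fix_col, max_sigma=6.0):
        rows, cols = np.indices(image.shape)
        ecc = np.hypot(rows - fix_row, cols - fix_col)
        ecc = ecc / ecc.max()                      # eccentricity scaled to [0, 1]
        # Precompute a few blur levels and pick one per pixel by eccentricity.
        sigmas = np.linspace(0.0, max_sigma, 5)
        blurred = np.stack([image if s == 0 else gaussian_filter(image, s)
                            for s in sigmas])
        level = np.clip((ecc * (len(sigmas) - 1)).round().astype(int),
                        0, len(sigmas) - 1)
        return np.take_along_axis(blurred, level[None], axis=0)[0]

    if __name__ == "__main__":
        img = np.random.default_rng(3).random((128, 128))
        out = foveate(img, fix_row=64, fix_col=64)
        print("input var:", img.var().round(4), "foveated var:", out.var().round(4))
    ```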

  6. Low-level information and high-level perception: the case of speech in noise.

    PubMed

    Nahum, Mor; Nelken, Israel; Ahissar, Merav

    2008-05-20

    Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.

  7. An fMRI-study of locally oriented perception in autism: altered early visual processing of the block design test.

    PubMed

    Bölte, S; Hubl, D; Dierks, T; Holtmann, M; Poustka, F

    2008-01-01

    Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color-counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. Findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle- and grating-selective neurons that contribute to shape representation, figure-ground segregation, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.

  8. Acquiring skill at medical image inspection: learning localized in early visual processes

    NASA Astrophysics Data System (ADS)

    Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.

    1997-04-01

    Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher-level 'conceptual learning.' Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway, cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies, twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance-contrast versions of the images and the remainder examined positive-contrast versions. On the fifth day, observers swapped to inspect the respective opposite-contrast images. In both experiments, learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.

  9. Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield

    PubMed Central

    Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.

    2014-01-01

    Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method the time it takes for interocularly suppressed stimuli to breach the threshold of visibility, is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476

  10. Using a familiar risk comparison within a risk ladder to improve risk understanding by low numerates: a study of visual attention.

    PubMed

    Keller, Carmen

    2011-07-01

    Previous experimental research provides evidence that a familiar risk comparison within a risk ladder is understood by low- and high-numerate individuals. It especially helps low numerates to better evaluate risk. In the present study, an eye tracker was used to capture individuals' visual attention to a familiar risk comparison, such as the risk associated with smoking. Two parameters of information processing, efficiency and level, were derived from visual attention. A random sample of participants from the general population (N = 68) interpreted a given risk level with the help of the risk ladder. Numeracy was negatively correlated with overall visual attention on the risk ladder (r(s) = -0.28, p = 0.01), indicating that the lower the numeracy, the more time spent looking at the whole risk ladder. Numeracy was positively correlated with the efficiency of processing relevant frequency information (r(s) = 0.34, p < 0.001) and relevant textual information (r(s) = 0.34, p < 0.001), but not with the efficiency of processing relevant comparative and numerical information. There was a significant negative correlation between numeracy and the level of processing of relevant comparative risk information (r(s) = -0.21, p < 0.01), indicating that low numerates processed the comparative risk information more deeply than the high numerates. There was no correlation between numeracy and perceived risk. These results add to previous experimental research, indicating that the smoking risk comparison was crucial for low numerates to evaluate and understand risk. Furthermore, the eye-tracker method is promising for studying information processing and improving risk-communication formats. © 2011 Society for Risk Analysis.
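
    The reported statistics are Spearman rank correlations between numeracy and eye-tracking measures. The sketch below shows how such a correlation is computed on simulated data; the numbers it produces are not the study's results, which come from the paper itself.

    ```python
    # Sketch: Spearman correlation between numeracy and total dwell time,
    # using simulated data in place of the study's eye-tracking measures.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    n = 68
    numeracy = rng.integers(0, 12, n).astype(float)
    # Simulated dwell time (s) that tends to decrease with numeracy
    dwell_time = 60 - 2.0 * numeracy + rng.normal(0, 8, n)

    rs, p = spearmanr(numeracy, dwell_time)
    print(f"numeracy vs total dwell time: r_s = {rs:.2f}, p = {p:.3f}")
    ```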

  11. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    PubMed Central

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black-and-white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level, visually driven bottom-up processes when a human subject was represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results, proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  12. When art moves the eyes: a behavioral and eye-tracking study.

    PubMed

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black-and-white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level, visually driven bottom-up processes when a human subject was represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results, proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.

  13. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    ERIC Educational Resources Information Center

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…

  14. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task in patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are supposed to be intact in schizophrenia. Patients with schizophrenia revealed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data constrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  15. Identity Development in German Adolescents with and without Visual Impairments

    ERIC Educational Resources Information Center

    Pinquart, Martin; Pfeiffer, Jens P.

    2013-01-01

    Introduction: The study reported here assessed the exploration of identity and commitment to an identity in German adolescents with and without visual impairments. Methods: In total, 178 adolescents with visual impairments (blindness or low vision) and 526 sighted adolescents completed the Ego Identity Process Questionnaire. Results: The levels of…

  16. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
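
    The key measure contrasted here, the reaction time slope, is the per-item cost obtained by fitting reaction time against set size. The sketch below shows the computation on hypothetical values; the numbers are invented, and the paper's conclusions rest on its own data.

    ```python
    # Sketch: estimate feature vs. conjunction search slopes (ms per item)
    # from reaction times at several set sizes.
    import numpy as np

    set_sizes = np.array([4, 8, 16, 32])

    # Hypothetical median RTs (ms) for one observer
    rt_feature = np.array([520, 525, 530, 528])        # flat: pop-out search
    rt_conjunction = np.array([610, 720, 930, 1350])   # rises with set size

    for label, rts in (("feature", rt_feature), ("conjunction", rt_conjunction)):
        slope, intercept = np.polyfit(set_sizes, rts, 1)
        print(f"{label:11s} search slope: {slope:5.1f} ms/item "
              f"(intercept {intercept:.0f} ms)")
    ```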

  17. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that the visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task using scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity assessed by perimetry. However, the deficit was greater for categorization than for detection. Patients without a central defect showed performance similar to the controls in the detection and categorization of faces. However, while the detection of scene images was well maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572

  18. Theoretical approaches to lightness and perception.

    PubMed

    Gilchrist, Alan

    2015-01-01

    Theories of lightness, like theories of perception in general, can be categorized as high-level, low-level, and mid-level. However, I will argue that in practice there are only two categories: one-stage mid-level theories, and two-stage low-high theories. Low-level theories usually include a high-level component and high-level theories include a low-level component, the distinction being mainly one of emphasis. Two-stage theories are the modern incarnation of the persistent sensation/perception dichotomy according to which an early experience of raw sensations, faithful to the proximal stimulus, is followed by a process of cognitive interpretation, typically based on past experience. Like phlogiston or the ether, raw sensations seem like they must exist, but there is no clear evidence for them. Proximal stimulus matches are postperceptual, not read off an early sensory stage. Visual angle matches are achieved by a cognitive process of flattening the visual world. Likewise, brightness (luminance) matches depend on a cognitive process of flattening the illumination. Brightness is not the input to lightness; brightness is slower than lightness. Evidence for an early (< 200 ms) mosaic stage is shaky. As for cognitive influences on perception, the many claims tend to fall apart upon close inspection of the evidence. Much of the evidence for the current revival of the 'new look' is probably better explained by (1) a natural desire of (some) subjects to please the experimenter, and (2) the ease of intuiting an experimental hypothesis. High-level theories of lightness are overkill. The visual system does not need to know the amount of illumination, merely which surfaces share the same illumination. This leaves mid-level theories derived from the gestalt school. Here the debate seems to revolve around layer models and framework models. Layer models fit our visual experience of a pattern of illumination projected onto a pattern of reflectance, while framework models provide a better account of illusions and failures of constancy. Evidence for and against these approaches is reviewed.

  19. Electrophysiological evidence for biased competition in V1 for fear expressions.

    PubMed

    West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay

    2011-11-01

    When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

  20. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  1. Specific problems in visual cognition of dyslexic readers: Face discrimination deficits predict dyslexia over and above discrimination of scrambled faces and novel objects.

    PubMed

    Sigurdardottir, Heida Maria; Fridriksdottir, Liv Elisabet; Gudjonsdottir, Sigridur; Kristjánsson, Árni

    2018-06-01

    Evidence of interdependencies between face and word processing mechanisms suggests possible links between reading problems and abnormal face processing. In two experiments we assessed such high-level visual deficits in people with a history of reading problems. Experiment 1 showed that people who were worse at face matching had greater reading problems. In Experiment 2, matched dyslexic and typical readers were tested, and difficulties with face matching were consistently found to predict dyslexia over and above both novel-object matching and the matching of noise patterns that shared low-level visual properties with faces. Furthermore, ADHD measures could not account for face matching problems. We speculate that reading difficulties in dyslexia are partially caused by specific deficits in high-level visual processing, in particular for visual object categories such as faces and words with which people have extensive experience. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Fear improves mental rotation of low-spatial-frequency visual representation.

    PubMed

    Borst, Grégoire

    2013-10-01

    Previous studies have demonstrated that the brief presentation of a fearful face improves not only low-level visual processing, such as contrast and orientation sensitivity, but also visuospatial processing. In the present study, we investigated whether fear improves mental rotation efficiency (i.e., the mental rotation rate) because of the effect of fear on the sensitivity of magnocellular neurons. We asked two groups of participants to perform a mental rotation task with either low-pass or high-pass filtered 3-dimensional objects. Following the presentation of a fearful face, participants mentally rotated objects faster than when a neutral face was presented, but only for low-pass filtered objects. The results suggest that fear improves mental rotation efficiency by increasing sensitivity to motion-related visual information within the magnocellular pathway.

  3. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    PubMed

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists whose robustness is controlled by high-level, contextual factors.

  4. Temporal resolution of orientation-defined texture segregation: a VEP study.

    PubMed

    Lachapelle, Julie; McKerral, Michelle; Jauffret, Colin; Bach, Michael

    2008-09-01

    Orientation is one of the visual dimensions that subserve figure-ground discrimination. A spatial gradient in orientation leads to "texture segregation", which is thought to rely on concurrent parallel processing across the visual field, without scanning. A component related to texture segregation ("tsVEP") can be isolated in the visual-evoked potential (VEP). Our objective was to evaluate the temporal frequency dependence of the tsVEP, comparing the processing speed of low-level features (e.g., orientation; the corresponding response is here denoted llVEP) with that of texture segregation, in light of a recent controversy in the literature. VEPs were recorded in seven normal adults. Oriented line segments of 0.1 degrees x 0.8 degrees at 100% contrast were presented in four different arrangements: either oriented in parallel for two homogeneous stimuli (from which the low-level VEP (llVEP) was obtained) or with a 90 degrees orientation gradient for two textured ones (from which the texture VEP was obtained). The orientation texture condition was presented at eight different temporal frequencies ranging from 7.5 to 45 Hz. Fourier analysis was used to isolate low-level components at the pattern-change frequency and texture-segregation components at half that frequency. For all subjects, the high-cutoff frequency was lower for the tsVEP than for the llVEP, on average 12 Hz vs. 17 Hz (P = 0.017). The results suggest that the processing of feature gradients to extract texture segregation requires additional processing time, resulting in a lower fusion frequency.
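
    A minimal sketch of how a Fourier analysis of this kind can separate a steady-state response at the pattern-change frequency from a texture-segregation response at half that frequency. It is not the authors' analysis pipeline; the sampling rate, stimulation frequency, and simulated signal are illustrative assumptions.

      import numpy as np

      fs = 1000.0            # sampling rate in Hz (assumed)
      pattern_hz = 15.0      # pattern-change frequency in Hz (assumed)
      t = np.arange(0, 2.0, 1.0 / fs)

      # Simulated steady-state VEP: a low-level component at the pattern-change
      # frequency plus a texture-segregation component at half that frequency.
      rng = np.random.default_rng(0)
      vep = (1.0 * np.sin(2 * np.pi * pattern_hz * t)
             + 0.4 * np.sin(2 * np.pi * (pattern_hz / 2) * t)
             + 0.2 * rng.standard_normal(t.size))

      spectrum = np.fft.rfft(vep)
      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      amplitude = 2.0 * np.abs(spectrum) / t.size   # single-sided amplitude spectrum

      def amp_at(f_target):
          # Amplitude at the frequency bin closest to f_target.
          return amplitude[np.argmin(np.abs(freqs - f_target))]

      print("llVEP component at pattern-change frequency:", amp_at(pattern_hz))
      print("tsVEP component at half that frequency:     ", amp_at(pattern_hz / 2))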

  5. Optimizing text for an individual's visual system: The contribution of visual crowding to reading difficulties.

    PubMed

    Joo, Sung Jun; White, Alex L; Strodtman, Douglas J; Yeatman, Jason D

    2018-06-01

    Reading is a complex process that involves low-level visual processing, phonological processing, and higher-level semantic processing. Given that skilled reading requires integrating information among these different systems, it is likely that reading difficulty, known as dyslexia, can emerge from impairments at any stage of the reading circuitry. To understand the factors contributing to reading difficulties within individuals, it is necessary to diagnose the function of each component of the reading circuitry. Here, we investigated whether adults with dyslexia who have impairments in visual processing respond to a visual manipulation specifically targeting their impairment. We collected psychophysical measures of visual crowding and tested how each individual's reading performance was affected by increased text spacing, a manipulation designed to alleviate severe crowding. Critically, we identified a sub-group of individuals with dyslexia showing elevated crowding and found that these individuals read faster when text was rendered with increased letter-, word-, and line-spacing. Our findings point to a subtype of dyslexia involving elevated crowding and demonstrate that individuals benefit from interventions personalized to their specific impairments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images obtained by fusing infrared and low-light-level images contain information from both channels and help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images, and if target extraction is applied blindly, perception of the scene information is seriously degraded. To solve this problem, a new fusion method based on visual perception is proposed in this paper. Extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods, and infrared and low-light-level color fusion images are obtained based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the target detection rate but also preserve rich natural information about the scenes.
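
    As a point of reference for the "simple fusion" baseline mentioned above, the sketch below maps a low-light-level image and an infrared image onto separate color channels. It does not implement the proposed perception-based target learning; the channel assignment and the stand-in images are assumptions.

      import numpy as np

      def simple_color_fusion(lowlight, infrared):
          # Fuse two single-channel float images (values in [0, 1]) into one RGB image.
          assert lowlight.shape == infrared.shape
          rgb = np.zeros(lowlight.shape + (3,), dtype=np.float32)
          rgb[..., 0] = infrared                                   # red: infrared channel
          rgb[..., 1] = lowlight                                   # green: low-light channel
          rgb[..., 2] = np.clip(lowlight - infrared, 0.0, 1.0)     # blue: channel difference
          return rgb

      # Usage with random stand-in images:
      lowlight = np.random.rand(240, 320).astype(np.float32)
      infrared = np.random.rand(240, 320).astype(np.float32)
      fused = simple_color_fusion(lowlight, infrared)
      print(fused.shape)  # (240, 320, 3)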

  7. Higher levels of depression are associated with reduced global bias in visual processing.

    PubMed

    de Fockert, Jan W; Cooper, Andrew

    2014-04-01

    Negative moods have been associated with a tendency to prioritise local details in visual processing. The current study investigated the relation between depression and visual processing using the Navon task, a standard task of local and global processing. In the Navon task, global stimuli are presented that are made up of many local parts, and the participants are instructed to report the identity of either a global or a local target shape. Participants with a low self-reported level of depression showed evidence of the expected global processing bias, and were significantly faster at responding to the global, compared with the local level. By contrast, no such difference was observed in participants with high levels of depression. The reduction of the global bias associated with high levels of depression was only observed in the overall speed of responses to global (versus local) targets, and not in the level of interference produced by the global (versus local) distractors. These results are in line with recent findings of a dissociation between local/global processing bias and interference from local/global distractors, and support the claim that depression is associated with a reduction in the tendency to prioritise global-level processing.

  8. The effect of integration masking on visual processing in perceptual categorization.

    PubMed

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    A luminance-grating stimulus for eliciting visual nystagmus was developed to test low-level visual motion processing in the presence of incoherent motion noise; experimental conditions were chosen to simulate the testing of cognitively impaired observers. Accompanying figures plot horizontal gaze position over time and mark visual nystagmus events detected by the filter.

  10. Nonretinotopic visual processing in the brain.

    PubMed

    Melcher, David; Morrone, Maria Concetta

    2015-01-01

    A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

  11. Visual representation of spatiotemporal structure

    NASA Astrophysics Data System (ADS)

    Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.

    1998-07-01

    The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies in findings on ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently. This suggests a unified representation at an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

  12. The contribution of foveal and peripheral visual information to ensemble representation of face race.

    PubMed

    Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M

    2017-11-01

    The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.

  13. Bioplausible multiscale filtering in retino-cortical processing as a mechanism in perceptual grouping.

    PubMed

    Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W

    2017-12-01

    Why does our visual system fail to reconstruct reality when we look at certain patterns? Where do geometrical illusions start to emerge in the visual pathway? How far can computational models of vision go in reproducing the illusions that we perceive? This study addresses these questions by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among the many types of visual illusion, 'geometrical' and, in particular, 'tilt' illusions are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size, or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when, and where retinal processing takes place, and into the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex tilt illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of tilt illusion, predicting that the final tilt percept arises from multiple-scale processing of Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of the classical receptive-field implementation for simple cells in early stages of vision, with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has high potential for revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'anchoring theory' and 'perceptual grouping'.
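
    A minimal sketch of the multiple-scale Difference-of-Gaussians filtering that such a model builds on, not the authors' full retinal simulation. The scale values and the centre/surround ratio are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_bank(image, center_sigmas=(1.0, 2.0, 4.0), surround_ratio=2.0):
          # Return one Difference-of-Gaussians response map per scale for a 2D float image.
          responses = []
          for sigma in center_sigmas:
              center = gaussian_filter(image, sigma)
              surround = gaussian_filter(image, sigma * surround_ratio)
              responses.append(center - surround)    # ON-centre style response
          return responses

      # Usage with a stand-in tiled pattern:
      rows, cols = np.indices((128, 128))
      tiles = (((rows // 16) + (cols // 16)) % 2).astype(float)   # checkerboard of 16-pixel tiles
      maps = dog_bank(tiles)
      print([m.shape for m in maps])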

  14. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and simulation of a low-level multilayer vision processor that emulates, to some degree, the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each of which is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The paper provides the hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples.
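
    A rough software analogue of the first processing layers described above (smoothing/light adaptation, edge detection, coarse segmentation), intended only to illustrate the layered organisation; it is not the hexagonal array-processor design, and the filter choices and threshold are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel, label

      def layer1_smooth(image, sigma=1.0):
          # Smoothing and a crude form of light adaptation: blur, then rescale to [0, 1].
          smoothed = gaussian_filter(image, sigma)
          span = smoothed.max() - smoothed.min()
          return (smoothed - smoothed.min()) / (span + 1e-8)

      def layer2_edges(image):
          # Edge detection via gradient magnitude.
          return np.hypot(sobel(image, axis=0), sobel(image, axis=1))

      def layer3_segment(edges, threshold=0.3):
          # Coarse segmentation: label connected regions of low edge strength.
          regions, n_regions = label(edges < threshold * edges.max())
          return regions, n_regions

      img = np.random.rand(64, 64)
      regions, n_regions = layer3_segment(layer2_edges(layer1_smooth(img)))
      print("regions found:", n_regions)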

  15. Visual information processing of faces in body dysmorphic disorder.

    PubMed

    Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan

    2007-12-01

    Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms. Objective: To determine whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. Design: Case-control study. Setting: University hospital. Participants: Twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement. Intervention: Functional magnetic resonance imaging while performing matching tasks of face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. Main outcome measures: Blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Results: Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions, for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Conclusions: Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD. The fact that these findings occurred while subjects viewed others' faces suggests differences in visual processing beyond distortions of their own appearance.

  16. A pool of pairs of related objects (POPORO) for investigating visual semantic integration: behavioral and electrophysiological validation.

    PubMed

    Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A

    2012-07-01

    Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP), a deflection that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach on single-trial data, we demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
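
    A minimal sketch of the kind of low-level property check used when matching image sets, not the authors' exact feature set: mean luminance and RMS contrast are computed per image and summarised per semantic category. The stand-in images and the two features chosen are assumptions.

      import numpy as np

      def luminance_and_contrast(image):
          # Return (mean luminance, RMS contrast) for a grayscale float image in [0, 1].
          return image.mean(), image.std()

      def summarise(images):
          # Per-category mean and standard deviation of the two low-level features.
          stats = np.array([luminance_and_contrast(img) for img in images])
          return stats.mean(axis=0), stats.std(axis=0)

      # Usage with stand-in image sets for the related and unrelated categories:
      related = [np.random.rand(64, 64) for _ in range(10)]
      unrelated = [np.random.rand(64, 64) for _ in range(10)]
      print("related   (mean, std):", summarise(related))
      print("unrelated (mean, std):", summarise(unrelated))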

  17. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  18. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  19. Normal Visual Acuity and Electrophysiological Contrast Gain in Adults with High-Functioning Autism Spectrum Disorder.

    PubMed

    Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel

    2015-01-01

    A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher visual abnormalities. While earlier reports of enhanced, "eagle-like" visual acuity in ASD contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the occipital cortex level. Therefore, in this project, we investigated the electrophysiology of very early visual processing by analyzing the pattern electroretinogram-based contrast gain, the background noise amplitude, and the psychophysically measured visual acuities of participants with high-functioning ASD and education-matched controls. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or of altered psychophysically measured visual acuities. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.

  20. Prolonged fasting impairs neural reactivity to visual stimulation.

    PubMed

    Kohn, N; Wassenberg, A; Toygar, T; Kellermann, T; Weidenfeld, C; Berthold-Losleben, M; Chechko, N; Orfanos, S; Vocke, S; Laoutidis, Z G; Schneider, F; Karges, W; Habel, U

    2016-01-01

    Previous literature has shown that hypoglycemia influences the intensity of the BOLD signal. A similar but smaller effect may also be elicited by low-normal blood glucose levels in healthy individuals. This may not only confound the BOLD signal measured in fMRI, but also more generally interact with cognitive processing, and thus indirectly influence fMRI results. Here we show, in a placebo-controlled, crossover, double-blind study of 40 healthy subjects, that overnight fasting and low-normal glucose levels, compared with an activated, elevated glucose condition, have an impact on brain activation during basal visual stimulation. Additionally, functional connectivity of the visual cortex shows a strengthened association with higher-order attention-related brain areas in the elevated blood glucose condition compared to the fasting condition. In a fasting state, visual brain areas show stronger coupling to the inferior temporal gyrus. The results demonstrate that prolonged overnight fasting leads to a diminished BOLD signal in higher-order occipital processing areas when compared to an elevated blood glucose condition. Additionally, functional connectivity patterns underscore the modulatory influence of fasting on visual brain networks. Patterns of brain activation and functional connectivity associated with a broad range of attentional processes are affected by maturation and aging and are associated with psychiatric disease and intoxication. Thus, we conclude that prolonged fasting may decrease fMRI design sensitivity in any task involving attentional processes when fasting status or blood glucose is not controlled.

  1. Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception

    ERIC Educational Resources Information Center

    Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…

  2. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is a set of local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating, and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic, and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase, and installation of a MasPar parallel computer.

  3. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.

  4. Residual effects of ecstasy (3,4-methylenedioxymethamphetamine) on low level visual processes.

    PubMed

    Murray, Elizabeth; Bruno, Raimondo; Brown, John

    2012-03-01

    'Ecstasy' (3,4-methylenedioxymethamphetamine) induces impaired functioning of the serotonergic system, including in the occipital lobe. This study employed the 'tilt aftereffect' paradigm to operationalise the function of orientation-selective neurons among ecstasy consumers and controls, as a means of investigating the role of reduced serotonin in visual orientation processing. The magnitude of the tilt aftereffect reflects the extent of lateral inhibition between orientation-selective neurons and is elicited by both 'real' contours, processed in visual cortex area V1, and illusory contours, processed in V2. The magnitude of the tilt aftereffect for both contour types was examined among 19 ecstasy users (6 ecstasy only; 13 ecstasy-plus-cannabis users) and 23 matched controls (9 cannabis-only users; 14 drug-naive). Ecstasy users had a significantly greater tilt magnitude than non-users for real contours (Hedge's g = 0.63) but not for illusory contours (g = 0.20). These findings support literature suggesting that residual effects of ecstasy (and reduced serotonin) impair lateral inhibition between orientation-selective neurons in V1, while also suggesting that ecstasy may not substantially affect this process in V2. Multiple studies have now demonstrated ecstasy-related deficits in basic visual functions, including orientation and motion processing. Such low-level effects may contribute to the impact of ecstasy use on neuropsychological tests of visuospatial function. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To address how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective: they appeared for measures of visual processing speed but not for visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functions they are assumed to represent.

  6. Lateralized hybrid faces: evidence of a valence-specific bias in the processing of implicit emotions.

    PubMed

    Prete, Giulia; Laeng, Bruno; Tommasi, Luca

    2014-01-01

    It is well known that hemispheric asymmetries exist for the analysis of both low-level visual information (such as spatial frequency) and high-level visual information (such as emotional expressions). In this study, we assessed which of these factors underlies perceptual laterality effects with "hybrid faces": a type of stimulus that allows testing for unaware processing of emotional expressions, in which the emotion is displayed in the low-frequency information while an image of the same face with a neutral expression is superimposed on it. Despite hybrid faces being perceived as neutral, the emotional information modulates observers' social judgements. In the present study, participants were asked to assess the friendliness of hybrid faces displayed tachistoscopically, either centrally or laterally to fixation. We found a clear influence of the hidden emotions even with lateral presentations. Happy faces were rated as more friendly and angry faces as less friendly with respect to neutral faces. In general, hybrid faces were evaluated as less friendly when they were presented in the left visual field/right hemisphere than in the right visual field/left hemisphere. The results extend the validity of the valence hypothesis to the specific domain of unaware (subcortical) emotion processing.

  7. Altered activity of the primary visual area during gaze processing in individuals with high-functioning autistic spectrum disorder: a magnetoencephalography study.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki

    2013-01-01

    Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.

  8. Visual information underpinning skilled anticipation: The effect of blur on a coupled and uncoupled in situ anticipatory response.

    PubMed

    Mann, David L; Abernethy, Bruce; Farrow, Damian

    2010-07-01

    Coupled interceptive actions are understood to rely on neural processing, and on visual information, distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using either a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were better than uncoupled ones, and blurring of vision had different effects in the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence suggested that low levels of blur may enhance the uncoupled verbal perception of movement.

  9. Internal curvature signal and noise in low- and high-level vision

    PubMed Central

    Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru

    2011-01-01

    How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
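
    A minimal sketch, under strong simplifying assumptions, of the estimation logic described above: the internal response to a curved segment is treated as a Gaussian whose mean is the internal curvature signal and whose standard deviation is the internal noise, trial-by-trial reports are simulated from it, and the (signal, noise) pair that best reproduces a set of behavioral response proportions is selected by grid search. The response categories, grid ranges, and the made-up behavioral data are assumptions, and this is not the authors' implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      report_levels = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])   # assumed curvature response categories

      def simulate_reports(signal, noise, n_trials=5000):
          # Simulate categorical curvature reports from a Gaussian internal response.
          samples = rng.normal(signal, noise, n_trials)
          nearest = np.argmin(np.abs(samples[:, None] - report_levels), axis=1)
          choices = report_levels[nearest]
          return np.array([(choices == level).mean() for level in report_levels])

      def fit_signal_noise(observed_proportions):
          # Grid-search the (signal, noise) pair minimising squared error to the observed data.
          best, best_err = None, np.inf
          for signal in np.linspace(-0.4, 0.4, 17):
              for noise in np.linspace(0.05, 0.5, 10):
                  err = np.sum((simulate_reports(signal, noise) - observed_proportions) ** 2)
                  if err < best_err:
                      best, best_err = (signal, noise), err
          return best

      # Usage with made-up response proportions for one stimulus condition:
      observed = np.array([0.05, 0.15, 0.45, 0.25, 0.10])
      print(fit_signal_noise(observed))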

  10. Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender

    PubMed Central

    Rozenkrants, Bella; Polich, John

    2008-01-01

    Objective: To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods: Images from the International Affective Picture System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results: High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion: The findings suggest that arousal level is the primary determinant of affective oddball processing, and that valence minimally influences ERP amplitude. Significance: Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987

  11. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    PubMed

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  12. Differential roles of low and high spatial frequency content in abnormal facial emotion perception in schizophrenia.

    PubMed

    McBain, Ryan; Norton, Daniel; Chen, Yue

    2010-09-01

    While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
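
    A minimal sketch of the spatial-frequency manipulation described above: a Gaussian blur serves as the low-pass filter and its complement as the high-pass filter. The cutoff (sigma) and the stand-in image are illustrative assumptions rather than the study's filter specification.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def split_spatial_frequencies(image, sigma=4.0):
          # Return (low-pass, high-pass) versions of a grayscale float image.
          low = gaussian_filter(image, sigma)
          high = image - low + image.mean()    # re-centre so the high-pass image remains displayable
          return low, high

      face = np.random.rand(128, 128)          # stand-in for a face photograph
      low_sf, high_sf = split_spatial_frequencies(face)
      print(low_sf.shape, high_sf.shape)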

  13. Visual Temporal Processing in Dyslexia and the Magnocellular Deficit Theory: The Need for Speed?

    ERIC Educational Resources Information Center

    McLean, Gregor M. T.; Stuart, Geoffrey W.; Coltheart, Veronika; Castles, Anne

    2011-01-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore "temporal" aspects of magnocellular functioning…

  14. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. In vivo imaging of an inducible oncogenic tumor antigen visualizes tumor progression and predicts CTL tolerance.

    PubMed

    Buschow, Christian; Charo, Jehad; Anders, Kathleen; Loddenkemper, Christoph; Jukica, Ana; Alsamah, Wisam; Perez, Cynthia; Willimsky, Gerald; Blankenstein, Thomas

    2010-03-15

    Visualizing oncogene/tumor Ag expression by noninvasive imaging is of great interest for understanding processes of tumor development and therapy. We established transgenic (Tg) mice conditionally expressing a fusion protein of the SV40 large T Ag and luciferase (TagLuc) that allows monitoring of oncogene/tumor Ag expression by bioluminescent imaging upon Cre recombinase-mediated activation. Independent of Cre-mediated recombination, the TagLuc gene was expressed at low levels in different tissues, probably due to the leakiness of the stop cassette. The level of spontaneous TagLuc expression, detected by bioluminescent imaging, varied between the different Tg lines, depended on the nature of the Tg expression cassette, and correlated with Tag-specific CTL tolerance. Following liver-specific Cre-loxP site-mediated excision of the stop cassette that separated the promoter from the TagLuc fusion gene, hepatocellular carcinoma development was visualized. The ubiquitous low level TagLuc expression caused the failure of transferred effector T cells to reject Tag-expressing tumors rather than causing graft-versus-host disease. This model may be useful to study different levels of tolerance, monitor tumor development at an early stage, and rapidly visualize the efficacy of therapeutic intervention versus potential side effects of low-level Ag expression in normal tissues.

  16. Familiarity enhances visual working memory for faces.

    PubMed

    Jackson, Margaret C; Raymond, Jane E

    2008-06-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.

  17. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  18. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

    Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
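
    For readers unfamiliar with dynamic neural field models, the sketch below simulates a single one-dimensional field with local excitation and broader inhibition, the basic mechanism by which such models let one of several inputs win a selection decision. The parameter values, field size, and inputs are illustrative only and are not taken from the published model.

      import numpy as np

      def gaussian_kernel(size, sigma_exc=2.0, sigma_inh=6.0, a_exc=1.0, a_inh=0.5):
          # Local excitation minus broader inhibition (difference of Gaussians).
          x = np.arange(size) - size // 2
          return (a_exc * np.exp(-x**2 / (2 * sigma_exc**2))
                  - a_inh * np.exp(-x**2 / (2 * sigma_inh**2)))

      def simulate_field(stimulus, steps=200, dt=1.0, tau=10.0, h=-5.0, beta=1.0):
          # Evolve a 1D Amari-style field u(x, t) under a fixed external input.
          u = np.full(stimulus.size, h, dtype=float)     # activation starts at resting level h
          w = gaussian_kernel(stimulus.size)             # lateral interaction kernel
          for _ in range(steps):
              f = 1.0 / (1.0 + np.exp(-beta * u))        # sigmoid output nonlinearity
              interaction = np.convolve(f, w, mode="same")
              u += (dt / tau) * (-u + h + stimulus + interaction)
          return u

      # Two spatial inputs of different strength; the field settles into a peak
      # at the more strongly driven location.
      stim = np.zeros(100)
      stim[30] = 8.0    # e.g., the cued saccade target location
      stim[70] = 7.5    # e.g., a distractor matching the memorized color
      print("selected location:", int(np.argmax(simulate_field(stim))))

    In the full model described above, several such fields for spatial and surface-feature representations are bidirectionally coupled, so that a memorized color can bias which spatial input wins.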

  20. MEMS-based system and image processing strategy for epiretinal prosthesis.

    PubMed

    Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong

    2015-01-01

    Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels down to the low electrode resolution. These image processing methods thus assist epiretinal prostheses in restoring vision.
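
    As a point of reference for the "directly merging pixels" baseline against which the strategy was compared, the sketch below block-averages a camera frame down to an electrode-grid resolution and quantizes it to a few stimulation levels; the grid size, number of levels, and frame dimensions are arbitrary choices for illustration, not the parameters of the described system.

      import numpy as np

      def to_electrode_grid(image, grid=(32, 32), levels=8):
          # Baseline "pixel merging": block-average a grayscale frame down to the
          # electrode-array resolution and quantize to a few stimulation levels.
          gh, gw = grid
          h, w = image.shape
          img = image[: h - h % gh, : w - w % gw].astype(float)  # crop to a multiple of the grid
          blocks = img.reshape(gh, img.shape[0] // gh, gw, img.shape[1] // gw)
          lowres = blocks.mean(axis=(1, 3))
          span = lowres.max() - lowres.min() + 1e-9
          return np.round((levels - 1) * (lowres - lowres.min()) / span).astype(int)

      # Example: a synthetic 240x320 grayscale frame reduced to a 32x32 phosphene map.
      frame = np.random.default_rng(0).integers(0, 256, size=(240, 320)).astype(float)
      print(to_electrode_grid(frame).shape)   # (32, 32)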

  1. A Neural Substrate for Atypical Low-Level Visual Processing in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Vandenbroucke, Myriam W. G.; Scholte, H. Steven; van Engeland, Herman; Lamme, Victor A. F.; Kemner, Chantal

    2008-01-01

    An important characteristic of autism spectrum disorder (ASD) is increased visual detail perception. Yet, there is no standing neurobiological explanation for this aspect of the disorder. We show evidence from EEG data, from 31 control subjects (three females) and 13 subjects (two females) aged 16-28 years, for a specific impairment in object…

  2. Progress in high-level exploratory vision

    NASA Astrophysics Data System (ADS)

    Brand, Matthew

    1993-08-01

    We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.

  3. Abnormalities of Object Visual Processing in Body Dysmorphic Disorder

    PubMed Central

    Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.

    2013-01-01

    Background: Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods: Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results: The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions: Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897

  4. Low-level awareness accompanies "unconscious" high-level processing during continuous flash suppression.

    PubMed

    Gelbard-Sagiv, Hagar; Faivre, Nathan; Mudrik, Liad; Koch, Christof

    2016-01-01

    The scope and limits of unconscious processing are a matter of ongoing debate. Lately, continuous flash suppression (CFS), a technique for suppressing visual stimuli, has been widely used to demonstrate surprisingly high-level processing of invisible stimuli. Yet, recent studies showed that CFS might actually allow low-level features of the stimulus to escape suppression and be consciously perceived. The influence of such low-level awareness on high-level processing might easily go unnoticed, as studies usually only probe the visibility of the feature of interest, and not that of lower-level features. For instance, face identity is held to be processed unconsciously since subjects who fail to judge the identity of suppressed faces still show identity priming effects. Here we challenge these results, showing that such high-level priming effects are indeed induced by faces whose identity is invisible, but critically, only when a lower-level feature, such as color or location, is visible. No evidence for identity processing was found when subjects had no conscious access to any feature of the suppressed face. These results suggest that high-level processing of an image might be enabled by-or co-occur with-conscious access to some of its low-level features, even when these features are not relevant to the processed dimension. Accordingly, they call for further investigation of lower-level awareness during CFS, and reevaluation of other unconscious high-level processing findings.

  5. Stepwise emergence of the face-sensitive N170 event-related potential component.

    PubMed

    Jemel, Boutheina; Schuller, Anne-Marie; Cheref-Khan, Yasémine; Goffaux, Valérie; Crommelinck, Marc; Bruyer, Raymond

    2003-11-14

    The present study used a parametric design to characterize early event-related potentials (ERP) to face stimuli embedded in gradually decreasing random noise levels. For both N170 and the vertex positive potential (VPP) there was a linear increase in amplitude and decrease in latency with decreasing levels of noise. In contrast, the earlier visual P1 component was stable across noise levels. The P1/N170 dissociation suggests not only a functional dissociation between low and high-level visual processing of faces but also that the N170 reflects the integration of sensorial information into a unitary representation. In addition, the N170/VPP association supports the view that they reflect the same processes operating when viewing faces.

  6. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
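
    The fixation analysis described above can be illustrated with a short sketch that compares model-predicted saliency at fixated locations against a random-location baseline; the saliency map itself would come from any low-level saliency model (not specified here), and the toy map and fixation coordinates below are made up for demonstration.

      import numpy as np

      def saliency_at_fixations(saliency_map, fixations, n_baseline=10000, seed=0):
          # Compare model saliency at fixated pixels with a random-location baseline.
          sal = saliency_map / saliency_map.max()          # normalize to [0, 1]
          rows, cols = np.asarray(fixations).T
          fixated = sal[rows, cols].mean()
          rng = np.random.default_rng(seed)
          baseline = sal[rng.integers(0, sal.shape[0], n_baseline),
                         rng.integers(0, sal.shape[1], n_baseline)].mean()
          return fixated, baseline

      # Toy saliency map with a single salient region and three fixations near its peak.
      sal_map = np.zeros((100, 100))
      sal_map[40:60, 40:60] = 1.0
      fix, base = saliency_at_fixations(sal_map, [(50, 50), (45, 55), (58, 42)])
      print(f"saliency at fixations: {fix:.2f}, chance baseline: {base:.2f}")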

  7. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Limits on perceptual encoding can be predicted from known receptive field properties of human visual cortex.

    PubMed

    Cohen, Michael A; Rhee, Juliana Y; Alvarez, George A

    2016-01-01

    Human cognition has a limited capacity that is often attributed to the brain having finite cognitive resources, but the nature of these resources is usually not specified. Here, we show evidence that perceptual interference between items can be predicted by known receptive field properties of the visual cortex, suggesting that competition within representational maps is an important source of the capacity limitations of visual processing. Across the visual hierarchy, receptive fields get larger and represent more complex, high-level features. Thus, when presented simultaneously, high-level items (e.g., faces) will often land within the same receptive fields, while low-level items (e.g., color patches) will often not. Using a perceptual task, we found long-range interference between high-level items, but only short-range interference for low-level items, with both types of interference being weaker across hemifields. Finally, we show that long-range interference between items appears to occur primarily during perceptual encoding and not during working memory maintenance. These results are naturally explained by the distribution of receptive fields and establish a link between perceptual capacity limits and the underlying neural architecture. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  9. The Lingering Effects of an Artificial Blind Spot

    PubMed Central

    Morgan, Michael J.; McEwan, William; Solomon, Joshua

    2007-01-01

    Background: When steady fixation is maintained on the centre of a large patch of texture, holes in the periphery of the texture rapidly fade from awareness, producing artificial scotomata (i.e., invisible areas of reduced vision, like the natural ‘blind spot’). There has been considerable controversy about whether this apparent ‘filling in’ depends on a low-level or high-level visual process. Evidence for an active process is that when the texture around the scotomata is suddenly removed, phantasms of the texture appear within the previous scotomata. Methodology: To see if these phantasms were equivalent to real low-level signals, we measured contrast discrimination for real dynamic texture patches presented on top of the phantasms. Principal Findings: Phantasm intensity varied with adapting contrast. Contrast discrimination depended on both (real) pedestal contrast and phantasm intensity, in a manner indicative of a common sensory threshold. The phantasms showed inter-ocular transfer, proving that their effects are cortical rather than retinal. Conclusions: We show that this effect is consistent with a tonic spreading of the adapting texture into the scotomata, coupled with some overall loss of sensitivity. Our results support the view that ‘filling in’ happens at an early stage of visual processing, quite possibly in primary visual cortex (V1). PMID:17327917

  10. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus

    PubMed Central

    Barton, Brian; Brewer, Alyssa A.

    2017-01-01

    The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm² with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm² in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. In addition, cortical magnification measurements show that a larger extent of the pSTS VFM surface areas is devoted to the peripheral visual field than in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
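
    For orientation, the sketch below shows the core of population receptive field (pRF) modeling: an isotropic Gaussian pRF whose predicted response is its overlap with the stimulus aperture. The center, size, and aperture values are illustrative; in the actual analysis these parameters are fitted to each voxel's fMRI time series.

      import numpy as np

      def prf_response(aperture, x0, y0, sigma, extent=10.0):
          # Predicted response of an isotropic Gaussian pRF (center x0, y0; size sigma,
          # in degrees of visual angle) to a binary stimulus aperture spanning
          # +/- `extent` degrees of the visual field.
          n = aperture.shape[0]
          coords = np.linspace(-extent, extent, n)
          xx, yy = np.meshgrid(coords, coords)
          rf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
          # Overlap between the receptive field and the stimulated region of the field.
          return float((rf * aperture).sum() / rf.sum())

      # Example: a bar in the right visual field drives a pRF centered at (4, 0) deg
      # with a 3 deg size much more strongly than one centered at (-6, 0) deg.
      aperture = np.zeros((101, 101))
      aperture[:, 60:75] = 1.0   # vertical bar, right of fixation
      print(prf_response(aperture, x0=4.0, y0=0.0, sigma=3.0))
      print(prf_response(aperture, x0=-6.0, y0=0.0, sigma=3.0))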

  11. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700

  12. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  13. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little difference were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
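
    To make the comparison of image statistics concrete, the following sketch computes a local contrast map for a grayscale image, pools it into summary statistics, and also derives simplified versions of the two comparison statistics mentioned above (overall Fourier power and the skewness and kurtosis of the contrast distribution). The filter, window size, and synthetic test image are illustrative simplifications, not the contrast model used in the study.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from scipy.stats import kurtosis, skew

      def contrast_statistics(image, window=9):
          # Local RMS-contrast map, its pooled summary, higher-order distribution
          # statistics, and total non-DC Fourier power for a grayscale image.
          img = image.astype(float)
          local_mean = uniform_filter(img, size=window)
          local_var = uniform_filter(img ** 2, size=window) - local_mean ** 2
          contrast = np.sqrt(np.clip(local_var, 0, None))       # local RMS contrast map
          stats = {
              "contrast_mean": contrast.mean(),                  # pooled response magnitude
              "contrast_spread": contrast.std(),                 # pooled response variability
              "skewness": skew(contrast.ravel()),
              "kurtosis": kurtosis(contrast.ravel()),
          }
          power = np.abs(np.fft.fft2(img)) ** 2
          stats["fourier_power"] = power.sum() - power.flat[0]   # exclude the DC term
          return stats

      # Example with a synthetic, smoothly varying test image.
      rng = np.random.default_rng(1)
      image = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)
      print(contrast_statistics(image))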

  14. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect because participants reported more often the visual than auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  15. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology

    PubMed Central

    Maekawa, Toru; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual’s emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual’s perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952

  16. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    PubMed

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.

  17. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  18. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  19. Virtually simulated social pressure influences early visual processing more in low compared to high autonomous participants.

    PubMed

    Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried

    2014-02-01

    In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100 ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA participants. Copyright © 2013 Society for Psychophysiological Research.

  20. Visual motherese? Signal-to-noise ratios in toddler-directed television

    PubMed Central

    Wass, Sam V; Smith, Tim J

    2015-01-01

    Younger brains are noisier information processing systems; this means that information for younger individuals has to allow clearer differentiation between those aspects that are required for the processing task in hand (the ‘signal’) and those that are not (the ‘noise’). We compared toddler-directed and adult-directed TV programmes (TotTV/ATV). We examined how low-level visual features (that previous research has suggested influence gaze allocation) relate to semantic information, namely the location of the character speaking in each frame. We show that this relationship differs between TotTV and ATV. First, we conducted Receiver Operating Characteristic (ROC) analyses and found that feature congestion predicted speaking character location in TotTV but not ATV. Second, we used multiple analytical strategies to show that luminance differentials (flicker) predict face location more strongly in TotTV than ATV. Our results suggest that TotTV designers have intuited techniques for controlling toddler attention using low-level visual cues. The implications of these findings for structuring childhood learning experiences away from a screen are discussed. PMID:24702791

  1. Visual motherese? Signal-to-noise ratios in toddler-directed television.

    PubMed

    Wass, Sam V; Smith, Tim J

    2015-01-01

    Younger brains are noisier information processing systems; this means that information for younger individuals has to allow clearer differentiation between those aspects that are required for the processing task in hand (the 'signal') and those that are not (the 'noise'). We compared toddler-directed and adult-directed TV programmes (TotTV/ATV). We examined how low-level visual features (that previous research has suggested influence gaze allocation) relate to semantic information, namely the location of the character speaking in each frame. We show that this relationship differs between TotTV and ATV. First, we conducted Receiver Operating Characteristic (ROC) analyses and found that feature congestion predicted speaking character location in TotTV but not ATV. Second, we used multiple analytical strategies to show that luminance differentials (flicker) predict face location more strongly in TotTV than ATV. Our results suggest that TotTV designers have intuited techniques for controlling toddler attention using low-level visual cues. The implications of these findings for structuring childhood learning experiences away from a screen are discussed. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.

  2. The effect of gender and level of vision on the physical activity level of children and adolescents with visual impairment.

    PubMed

    Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali

    2012-01-01

    This study was designed to determine the physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between 8 and 16 years participated in the study. The physical activity level of participants was evaluated with a physical activity diary (PAD) and a one-mile run/walk test (OMR-WT). No difference was found between the PAD and the OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were found not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild and moderate activities and OMR-WT durations. However, no difference was found between the physical activity levels of blind girls and boys. The results of our study suggested that the physical activity level of visually impaired children and adolescents was low, and that gender affected physical activity in low vision children and adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Dissociation of sensitivity to spatial frequency in word and face preferential areas of the fusiform gyrus.

    PubMed

    Woodhead, Zoe Victoria Joan; Wise, Richard James Surtees; Sereno, Marty; Leech, Robert

    2011-10-01

    Different cortical regions within the ventral occipitotemporal junction have been reported to show preferential responses to particular objects. Thus, it is argued that there is evidence for a left-lateralized visual word form area and a right-lateralized fusiform face area, but the unique specialization of these areas remains controversial. Words are characterized by greater power in the high spatial frequency (SF) range, whereas faces comprise a broader range of high and low frequencies. We investigated how these high-order visual association areas respond to simple sine-wave gratings that varied in SF. Using functional magnetic resonance imaging, we demonstrated lateralization of activity that was concordant with the low-level visual property of words and faces; left occipitotemporal cortex is more strongly activated by high than by low SF gratings, whereas the right occipitotemporal cortex responded more to low than high spatial frequencies. Therefore, the SF of a visual stimulus may bias the lateralization of processing irrespective of its higher order properties.

  4. Emergence of transformation-tolerant representations of visual objects in rat lateral extrastriate cortex

    PubMed Central

    Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide

    2017-01-01

    Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to the rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730

  5. A ganglion-cell-based primary image representation method and its contribution to object recognition

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song

    2016-10-01

    A visual stimulus is represented by the biological visual system at several levels: from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but must serve a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purposes of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
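
    As a baseline for what such a ganglion-cell representation involves, the sketch below applies the classical center-surround (difference-of-Gaussians) receptive field to an image; the non-classical RF modeled in the paper adds a further modulatory region beyond this, and all parameter values here are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def ganglion_cell_response(image, sigma_center=1.0, surround_ratio=3.0, k_surround=0.8):
          # Classical center-surround response map: narrow excitatory center minus
          # a broader, weighted inhibitory surround (ON-center; negate for OFF-center).
          img = image.astype(float)
          center = gaussian_filter(img, sigma_center)
          surround = gaussian_filter(img, sigma_center * surround_ratio)
          return center - k_surround * surround

      # Example: the response map highlights the edge and suppresses uniform regions.
      image = np.zeros((64, 64))
      image[:, 32:] = 1.0   # a single vertical luminance edge
      response = ganglion_cell_response(image)
      print("near edge:", round(float(np.abs(response[:, 28:36]).max()), 3),
            "| uniform region:", round(float(np.abs(response[:, :20]).max()), 3))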

  6. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  7. Working memory encoding delays top-down attention to visual cortex.

    PubMed

    Scalf, Paige E; Dux, Paul E; Marois, René

    2011-09-01

    The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. Cognitive Psychology, 36, 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to interfere with the deployment of top-down attention [de Fockert, J. W., Rees, G., Frith, C. D., & Lavie, N. The role of working memory in visual selective attention. Science, 291, 1803-1806, 2001, doi:10.1126/science.1056496]. It is, therefore, possible that, in addition to delaying central processes, the engagement of working memory encoding (WME) also postpones perceptual processing as well. Here, we tested this hypothesis with time-resolved fMRI by assessing whether WME serially postpones the action of top-down attention on low-level sensory signals. In three experiments, participants viewed a skeletal rapid serial visual presentation sequence that contained two target items (T1 and T2) separated by either a short (550 msec) or long (1450 msec) SOA. During single-target runs, participants attended and responded only to T1, whereas in dual-target runs, participants attended and responded to both targets. To determine whether T1 processing delayed top-down attentional enhancement of T2, we examined T2 BOLD response in visual cortex by subtracting the single-task waveforms from the dual-task waveforms for each SOA. When the WME demands of T1 were high (Experiments 1 and 3), T2 BOLD response was delayed at the short SOA relative to the long SOA. This was not the case when T1 encoding demands were low (Experiment 2). We conclude that encoding of a stimulus into working memory delays the deployment of attention to subsequent target representations in visual cortex.

  8. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).

  9. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954
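
    The bandpass manipulation used in these experiments can be sketched as a Fourier-domain filter that keeps only a chosen range of spatial frequencies; the cutoff values, assumed image size in degrees, and random test image below are illustrative rather than the stimuli actually used.

      import numpy as np

      def bandpass_filter(image, low_cpd, high_cpd, degrees=8.0):
          # Keep only spatial frequencies between low_cpd and high_cpd cycles/degree,
          # assuming the (square) image spans `degrees` degrees of visual angle.
          img = image.astype(float)
          n = img.shape[0]
          freqs = np.fft.fftfreq(n, d=degrees / n)       # cycles per degree along each axis
          fx, fy = np.meshgrid(freqs, freqs)
          radius = np.sqrt(fx ** 2 + fy ** 2)
          mask = (radius >= low_cpd) & (radius <= high_cpd)
          return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

      # Example: a low-SF version (coarse structure only) vs. a high-SF version (fine detail only).
      rng = np.random.default_rng(2)
      image = rng.normal(size=(256, 256))
      low_sf = bandpass_filter(image, 0.0, 1.0)
      high_sf = bandpass_filter(image, 6.0, 16.0)
      print(low_sf.std(), high_sf.std())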

  10. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    PubMed

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  11. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  12. Visual and proprioceptive interaction in patients with bilateral vestibular loss☆

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  13. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  14. (Re)-wiring a brain with light: Clinical and visual processing findings after application of specific coloured glasses in patients with symptoms of a visual processing disorder (CVPD): Challenge of a possible new perspective?

    PubMed

    Friederichs, Edgar; Wahl, Siegfried

    2017-08-01

    The present investigation examined whether changes in late event-related potential patterns could be used to reflect clinical changes following therapeutic intervention with coloured glasses in a group of patients with symptoms of central visual processing disorder. Subjects were 13 patients with attention problems and learning disability, with an average age of 16 years (range 6-51 years). These patients were provided with specified coloured glasses, which they were required to wear during the daytime. Results indicated that the specified coloured glasses significantly improved attention performance. Furthermore, electrophysiological parameters revealed a significant change in the late event-related potential distribution pattern (latency, amplitudes). This reflects a synchronization of co-firing, wired-together neural assemblies responsible for visual processing, suggesting an accelerated neuromaturation process when using coloured glasses. Our results suggest that visual event-related potential measures are sensitive to changes in the clinical development of patients with visual processing deficits who wear appropriate coloured glasses. It will be discussed whether such a device might be useful for clinical improvement of distraction symptoms caused by visual processing deficits. A model is presented that explains these effects as induction of the mitochondrial respiratory chain, thereby increasing the patients' low ATP levels. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    PubMed Central

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
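    A reference frame transformation of the kind the model relies on can be illustrated with a toy computation: shift image coordinates into a frame centered on the reference object and then test a crude relational template. The coordinates, the "left of" criterion, and the function names below are hypothetical simplifications, not the neural dynamic model itself.

      import numpy as np

      def to_object_centered(target_xy, reference_xy):
          # Express the target's position relative to the reference object.
          return np.asarray(target_xy, float) - np.asarray(reference_xy, float)

      def fits_left_of(target_xy, reference_xy):
          # Crude "left of" template: negative-x half-plane of the reference-centered
          # frame, with the vertical offset smaller than the horizontal one.
          dx, dy = to_object_centered(target_xy, reference_xy)
          return dx < 0 and abs(dy) < abs(dx)

      print(fits_left_of(target_xy=(120, 200), reference_xy=(300, 210)))  # True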

  16. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
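    The Laplacian pyramid mentioned above can be sketched in a few lines: each level stores the band-pass detail removed by blurring and downsampling. This is a generic textbook construction with arbitrary parameter choices, not the focal-plane implementation developed by the authors.

      import numpy as np
      from scipy import ndimage

      def laplacian_pyramid(image, levels=4, sigma=1.0):
          pyramid, current = [], image.astype(float)
          for _ in range(levels):
              blurred = ndimage.gaussian_filter(current, sigma=sigma)
              pyramid.append(current - blurred)   # band-pass detail at this scale
              current = blurred[::2, ::2]         # downsample for the next level
          pyramid.append(current)                 # low-pass residual
          return pyramid

      print([level.shape for level in laplacian_pyramid(np.random.rand(128, 128))])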

  17. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our understanding of the brain, from neural networks toward cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of this analysis. This approach offers the opportunity not only to explain frequently unexplained results from cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the what and where visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  18. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  19. Improving Readability of an Evaluation Tool for Low-Income Clients Using Visual Information Processing Theories

    ERIC Educational Resources Information Center

    Townsend, Marilyn S.; Sylva, Kathryn; Martin, Anna; Metz, Diane; Wooten-Swanson, Patti

    2008-01-01

    Literacy is an issue for many low-income audiences. Using visual information processing theories, the goal was improving readability of a food behavior checklist and ultimately improving its ability to accurately capture existing changes in dietary behaviors. Using group interviews, low-income clients (n = 18) evaluated 4 visual styles. The text…

  20. Combining Low-Level Perception and Expectations in Conceptual Learning

    DTIC Science & Technology

    2004-01-12

    human representation and processing of visual information. San Francisco: W. H. Freeman, 1982. U. Neisser, Cognitive Psychology. New York: Appleton...expectation that characters are from a standard alphabet enables correct identification of characters which would otherwise be ambiguous (Neisser, 1966

  1. Tracking the evolution of crossmodal plasticity and visual functions before and after sight restoration

    PubMed Central

    Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno

    2014-01-01

    Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432

  2. Visual processing in Alzheimer's disease: surface detail and colour fail to aid object identification.

    PubMed

    Adlington, Rebecca L; Laws, Keith R; Gale, Tim M

    2009-10-01

    It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.

  3. Impaired downregulation of visual cortex during auditory processing is associated with autism symptomatology in children and adolescents with autism spectrum disorder.

    PubMed

    Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel

    2017-01-01

    Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  4. The role of pulvinar in the transmission of information in the visual hierarchy.

    PubMed

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response at the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much more sensitivity to contrast and has a similar level of contrast invariance of the tuning.

  5. The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy

    PubMed Central

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response at the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The output activity ratio among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much more sensitivity to contrast and has a similar level of contrast invariance of the tuning. PMID:22654750
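    The core difficulty described above, that repeated non-linear transmission can either flatten contrast responses or polarize them into an all-or-none pattern, can be illustrated with a toy feedforward chain. The sigmoid gains, sample count, and input scaling below are arbitrary illustrative choices, not the parameters of the published model.

      import numpy as np

      def feedforward_chain(contrast, gain, n_levels=10):
          rng = np.random.default_rng(0)
          # Gaussian random input whose width (standard deviation) codes for contrast.
          drive = np.abs(rng.normal(0.0, contrast, size=500)).mean()
          for _ in range(n_levels):
              # Integrate the previous level and transmit through a non-linearity.
              drive = 1.0 / (1.0 + np.exp(-gain * (drive - 0.5)))
          return drive

      # Low gain washes out contrast dependence; high gain produces a bimodal response.
      for gain in (2.0, 8.0):
          responses = [feedforward_chain(c, gain) for c in (0.1, 0.3, 0.6, 0.9)]
          print(f"gain {gain}:", [round(r, 2) for r in responses])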

  6. Phototaxis and the origin of visual eyes

    PubMed Central

    Randel, Nadine

    2016-01-01

    Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
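    The two-pixel comparison underlying visual phototaxis can be written down directly. The sketch below is a purely illustrative controller (made-up function and gain) that steers toward the brighter side; it is not a model of the Platynereis circuit.

      def phototaxis_turn(left_light, right_light, gain=1.0):
          # Elementary two-pixel visual phototaxis: compare the light level on the
          # two sides of the body and turn toward the brighter side.
          return gain * (right_light - left_light)  # positive = turn right

      print(phototaxis_turn(left_light=0.2, right_light=0.8))  # 0.6 -> turn right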

  7. Relative Spatial Frequency Processing Drives Hemispheric Asymmetry in Conscious Awareness

    PubMed Central

    Piazza, Elise A.; Silver, Michael A.

    2017-01-01

    Visual stimuli with different spatial frequencies (SFs) are processed asymmetrically in the two cerebral hemispheres. Specifically, low SFs are processed relatively more efficiently in the right hemisphere than the left hemisphere, whereas high SFs show the opposite pattern. In this study, we ask whether these differences between the two hemispheres reflect a low-level division that is based on absolute SF values or a flexible comparison of the SFs in the visual environment at any given time. In a recent study, we showed that conscious awareness of SF information (i.e., visual perceptual selection from multiple SFs simultaneously present in the environment) differs between the two hemispheres. Building upon that result, here we employed binocular rivalry to test whether this hemispheric asymmetry is due to absolute or relative SF processing. In each trial, participants viewed a pair of rivalrous orthogonal gratings of different SFs, presented either to the left or right of central fixation, and continuously reported which grating they perceived. We found that the hemispheric asymmetry in perception is significantly influenced by relative processing of the SFs of the simultaneously presented stimuli. For example, when a medium SF grating and a higher SF grating were presented as a rivalry pair, subjects were more likely to report that they initially perceived the medium SF grating when the rivalry pair was presented in the left visual hemifield (right hemisphere), compared to the right hemifield. However, this same medium SF grating, when it was paired in rivalry with a lower SF grating, was more likely to be perceptually selected when it was in the right visual hemifield (left hemisphere). Thus, the visual system’s classification of a given SF as “low” or “high” (and therefore, which hemisphere preferentially processes that SF) depends on the other SFs that are present, demonstrating that relative SF processing contributes to hemispheric differences in visual perceptual selection. PMID:28469585

  8. Subliminal number priming within and across the visual and auditory modalities.

    PubMed

    Kouider, Sid; Dehaene, Stanislas

    2009-01-01

    Whether masked number priming involves a low-level sensorimotor route or an amodal semantic level of processing remains highly debated. Several alternative interpretations have been put forward, proposing either that masked number priming is solely a byproduct of practice with numbers, or that stimulus awareness was underestimated. In a series of four experiments, we studied whether repetition and congruity priming for numbers reliably extend to novel (i.e., unpracticed) stimuli and whether priming transfers from a visual prime to an auditory target, even when carefully controlling for stimulus awareness. While we consistently observed cross-modal priming, the generalization to novel stimuli was weaker and reached significance only when considering the whole set of experiments. We conclude that number priming does involve an amodal, semantic level of processing, but is also modulated by task settings.

  9. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
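    The reliability-proportional rule referred to above is the standard maximum-likelihood cue-combination formula. The sketch below shows the generic computation with made-up numbers for a form cue and a motion cue; it is not the authors' fitted model.

      import numpy as np

      def integrate_cues(estimates, variances):
          # Weight each cue by its reliability (inverse variance), then normalize.
          estimates = np.asarray(estimates, float)
          reliabilities = 1.0 / np.asarray(variances, float)
          weights = reliabilities / reliabilities.sum()
          combined = (weights * estimates).sum()
          combined_variance = 1.0 / reliabilities.sum()  # never worse than the best cue alone
          return combined, combined_variance

      # Example: a reliable form cue (variance 0.01) and a noisier motion cue (0.04).
      print(integrate_cues(estimates=[0.7, 0.4], variances=[0.01, 0.04]))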

  10. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2" oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.
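    The two error measures used above, the mean (signed) error indexing bias and the variable error indexing precision, can be computed as follows. The wrapping into the 180-degree orientation space and the example data are illustrative assumptions, not the study's analysis code.

      import numpy as np

      def orientation_errors(true_deg, reported_deg):
          # Wrap differences into [-90, 90) because orientation repeats every 180 degrees.
          diff = (np.asarray(reported_deg, float) - float(true_deg) + 90.0) % 180.0 - 90.0
          mean_error = diff.mean()           # systematic bias, e.g., away from cardinal axes
          variable_error = diff.std(ddof=1)  # trial-to-trial precision
          return mean_error, variable_error

      # Example: reproductions of a 45-degree (oblique) target.
      print(orientation_errors(45, [44, 49, 46, 50, 47]))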

  11. Conscious Vision Proceeds from Global to Local Content in Goal-Directed Tasks and Spontaneous Vision.

    PubMed

    Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine

    2016-05-11

    The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.

  12. Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex

    PubMed Central

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V.

    2014-01-01

    In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190

  13. Dilution: A theoretical burden or just load? A reply to Tsal and Benoni (2010).

    PubMed

    Lavie, Nilli; Torralbo, Ana

    2010-12-01

    Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.

  14. Dilution: A theoretical burden or just load? A reply to Tsal and Benoni (2010)

    PubMed Central

    Lavie, Nilli; Torralbo, Ana

    2010-01-01

    Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere “dilution”) for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load. PMID:21133554

  15. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  16. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044025

  17. A computer-generated animated face stimulus set for psychophysiological research

    PubMed Central

    Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland., James

    2014-01-01

    Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer generated, dynamic, face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces controlled on low-level image characteristics that are applicable to a range of research questions in social perception. PMID:25028164

  18. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination ( PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading ( Raz-Kids ( RK )). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  19. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097

  20. The incomplete angler: effects created by visual omission.

    PubMed

    Findlay, John M

    2008-01-01

    The Fisherman, a picture painted by Jean-Louis Forain, demonstrates an interesting interaction between low and high level perceptual processing. The isolation and tranquillity of the fisherman in the picture are enhanced by the absence of his reflection, yet perceivers are rarely aware of the omission.

  1. Electrophysiological Correlates of Subliminal Perception of Facial Expressions in Individuals with Autistic Traits: A Backward Masking Study

    PubMed Central

    Vukusic, Svjetlana; Ciorciari, Joseph; Crewther, David P.

    2017-01-01

    People with Autism spectrum disorder (ASD) show difficulty in social communication, especially in the rapid assessment of emotion in faces. This study examined the processing of emotional faces in typically developing adults with high and low levels of autistic traits (measured using the Autism Spectrum Quotient—AQ). Event-related potentials (ERPs) were recorded during viewing of backward-masked neutral, fearful and happy faces presented under two conditions: subliminal (16 ms, below the level of visual conscious awareness) and supraliminal (166 ms, above the time required for visual conscious awareness). Individuals with low and high AQ differed in the processing of subliminal faces, with the low AQ group showing an enhanced N2 amplitude for subliminal happy faces. Some group differences were found in the condition effects, with the Low AQ showing shorter frontal P3b and N4 latencies for subliminal vs. supraliminal condition. Although results did not show any group differences on the face-specific N170 component, there were shorter N170 latencies for supraliminal vs. subliminal conditions across groups. The results observed on the N2, showing group differences in subliminal emotion processing, suggest that decreased sensitivity to the reward value of social stimuli is a common feature both of people with ASD as well as people with high autistic traits from the normal population. PMID:28588465

  2. Electrophysiological Correlates of Subliminal Perception of Facial Expressions in Individuals with Autistic Traits: A Backward Masking Study.

    PubMed

    Vukusic, Svjetlana; Ciorciari, Joseph; Crewther, David P

    2017-01-01

    People with Autism spectrum disorder (ASD) show difficulty in social communication, especially in the rapid assessment of emotion in faces. This study examined the processing of emotional faces in typically developing adults with high and low levels of autistic traits (measured using the Autism Spectrum Quotient-AQ). Event-related potentials (ERPs) were recorded during viewing of backward-masked neutral, fearful and happy faces presented under two conditions: subliminal (16 ms, below the level of visual conscious awareness) and supraliminal (166 ms, above the time required for visual conscious awareness). Individuals with low and high AQ differed in the processing of subliminal faces, with the low AQ group showing an enhanced N2 amplitude for subliminal happy faces. Some group differences were found in the condition effects, with the Low AQ showing shorter frontal P3b and N4 latencies for subliminal vs. supraliminal condition. Although results did not show any group differences on the face-specific N170 component, there were shorter N170 latencies for supraliminal vs. subliminal conditions across groups. The results observed on the N2, showing group differences in subliminal emotion processing, suggest that decreased sensitivity to the reward value of social stimuli is a common feature both of people with ASD as well as people with high autistic traits from the normal population.

  3. Is fear perception special? Evidence at the level of decision-making and subjective confidence.

    PubMed

    Koizumi, Ai; Mobbs, Dean; Lau, Hakwan

    2016-11-01

    Fearful faces are believed to be prioritized in visual perception. However, it is unclear whether the processing of low-level facial features alone can facilitate such prioritization or whether higher-level mechanisms also contribute. We examined potential biases for fearful face perception at the levels of perceptual decision-making and perceptual confidence. We controlled for lower-level visual processing capacity by titrating luminance contrasts of backward masks, and the emotional intensity of fearful, angry and happy faces. Under these conditions, participants showed liberal biases in perceiving a fearful face, in both detection and discrimination tasks. This effect was stronger among individuals with reduced density in dorsolateral prefrontal cortex, a region linked to perceptual decision-making. Moreover, participants reported higher confidence when they accurately perceived a fearful face, suggesting that fearful faces may have privileged access to consciousness. Together, the results suggest that mechanisms in the prefrontal cortex contribute to making fearful face perception special. © The Author (2016). Published by Oxford University Press.

  4. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
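
    The two-step scheme described above can be illustrated with a small sketch (not the authors' code): a ridge regression decodes a feature vector from voxel patterns, and the seen category is then identified by correlating the predicted features with category-average features. All arrays, dimensions, and the synthetic data below are illustrative assumptions.

        # Hypothetical sketch of hierarchical-feature decoding on synthetic data.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n_train, n_test, n_voxels, n_features, n_categories = 200, 20, 500, 100, 50

        # Synthetic stand-ins for fMRI patterns (X) and image-derived features (Y).
        W_true = rng.normal(size=(n_voxels, n_features))
        X_train = rng.normal(size=(n_train, n_voxels))
        Y_train = X_train @ W_true + rng.normal(scale=5.0, size=(n_train, n_features))
        X_test = rng.normal(size=(n_test, n_voxels))

        # Step 1: train a decoder that maps brain activity to visual features.
        decoder = Ridge(alpha=100.0).fit(X_train, Y_train)
        Y_pred = decoder.predict(X_test)

        # Step 2: identify the category whose average feature vector correlates best
        # with the predicted features; candidate categories can lie outside training.
        category_features = rng.normal(size=(n_categories, n_features))

        def identify(pred):
            r = [np.corrcoef(pred, c)[0, 1] for c in category_features]
            return int(np.argmax(r))

        print("identified category indices:", [identify(y) for y in Y_pred[:5]])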

  5. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2.

    PubMed

    Vernon, Richard J W; Gouws, André D; Lawrence, Samuel J D; Wade, Alex R; Morland, Antony B

    2016-05-25

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to "functional" organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1-V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more "abstracted" representation, typically considered the preserve of "category-selective" extrastriate cortex, can nevertheless emerge in retinotopic regions. Visual areas are typically identified either through retinotopy (e.g., V1-V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. Copyright © 2016 Vernon et al.

  6. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2

    PubMed Central

    Vernon, Richard J. W.; Gouws, André D.; Lawrence, Samuel J. D.; Wade, Alex R.

    2016-01-01

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to “functional” organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1–V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more “abstracted” representation, typically considered the preserve of “category-selective” extrastriate cortex, can nevertheless emerge in retinotopic regions. SIGNIFICANCE STATEMENT Visual areas are typically identified either through retinotopy (e.g., V1–V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. PMID:27225766
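
    A representational-similarity sketch in the spirit of the comparison above: the dissimilarity structure of ROI response patterns is correlated with a low-level (pixel-based) and a curvature-based model of stimulus similarity. Everything below is simulated, and the descriptor names are assumptions rather than the study's actual measures.

        # Illustrative representational similarity analysis on synthetic data.
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_stimuli, n_voxels = 24, 300

        patterns = rng.normal(size=(n_stimuli, n_voxels))      # ROI response patterns
        neural_rdm = pdist(patterns, metric="correlation")      # neural dissimilarities

        pixel_descr = rng.normal(size=(n_stimuli, 1024))        # stand-in low-level descriptors
        curve_descr = rng.normal(size=(n_stimuli, 16))          # stand-in curvature descriptors
        model_rdms = {
            "low-level": pdist(pixel_descr, metric="euclidean"),
            "curvature": pdist(curve_descr, metric="euclidean"),
        }

        # Which stimulus model best predicts the ROI's similarity structure?
        for name, model_rdm in model_rdms.items():
            rho, p = spearmanr(neural_rdm, model_rdm)
            print(f"{name:>9}: Spearman rho = {rho:.3f} (p = {p:.3f})")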

  7. Metarhodopsin control by arrestin, light-filtering screening pigments, and visual pigment turnover in invertebrate microvillar photoreceptors.

    PubMed

    Stavenga, Doekele G; Hardie, Roger C

    2011-03-01

    The visual pigments of most invertebrate photoreceptors have two thermostable photo-interconvertible states, the ground state rhodopsin and photo-activated metarhodopsin, which triggers the phototransduction cascade until it binds arrestin. The ratio of the two states in photoequilibrium is determined by their absorbance spectra and the effective spectral distribution of illumination. Calculations indicate that metarhodopsin levels in fly photoreceptors are maintained below ~35% in normal diurnal environments, due to the combination of a blue-green rhodopsin, an orange-absorbing metarhodopsin and red transparent screening pigments. Slow metarhodopsin degradation and rhodopsin regeneration processes further subserve visual pigment maintenance. In most insect eyes, where the majority of photoreceptors have green-absorbing rhodopsins and blue-absorbing metarhodopsins, natural illuminants are predicted to create metarhodopsin levels greater than 60% at high intensities. However, fast metarhodopsin decay and rhodopsin regeneration also play an important role in controlling metarhodopsin in green receptors, resulting in a high rhodopsin content at low light intensities and a reduced overall visual pigment content in bright light. A simple model for the visual pigment-arrestin cycle is used to illustrate the dependence of the visual pigment population states on light intensity, arrestin levels and pigment turnover.
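
    The photoequilibrium logic in this abstract can be made concrete with a toy calculation: the metarhodopsin fraction is set by the relative photon-capture rates of the two pigment states under a given illuminant. The Gaussian spectra, peak wavelengths, and the long-pass model of a red screening pigment below are rough illustrative assumptions, not measured templates.

        # Toy photoequilibrium calculation for a two-state rhodopsin/metarhodopsin pigment.
        import numpy as np

        wl = np.arange(350, 701, dtype=float)              # wavelength grid, 1-nm steps

        def gaussian(peak, width):
            """Crude Gaussian stand-in for an absorbance spectrum (not a measured template)."""
            return np.exp(-0.5 * ((wl - peak) / width) ** 2)

        alpha_R = gaussian(490.0, 45.0)                    # blue-green rhodopsin
        alpha_M = gaussian(570.0, 45.0)                    # orange-absorbing metarhodopsin
        illum = np.ones_like(wl)                           # flat "white" illuminant

        def meta_fraction(light):
            """Metarhodopsin fraction in photoequilibrium for a given light spectrum."""
            k_RM = np.sum(light * alpha_R)                 # rate of R -> M photoconversion
            k_MR = np.sum(light * alpha_M)                 # rate of M -> R photoconversion
            return k_RM / (k_RM + k_MR)

        print(f"flat spectrum:                f_M = {meta_fraction(illum):.2f}")

        # A red-transparent screening pigment acts roughly like a long-pass filter on the
        # light reaching the photopigment, pushing the equilibrium back toward rhodopsin.
        red_longpass = 1.0 / (1.0 + np.exp(-(wl - 600.0) / 10.0))
        print(f"behind red screening pigment: f_M = {meta_fraction(illum * red_longpass):.2f}")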

  8. Developing visual images for communicating information about antiretroviral side effects to a low-literate population.

    PubMed

    Dowse, Ros; Ramela, Thato; Barford, Kirsty-Lee; Browne, Sara

    2010-09-01

    The side effects of antiretroviral (ARV) therapy are linked to altered quality of life and adherence. Poor adherence has also been associated with low health-literacy skills, with an uninformed patient more likely to make ARV-related decisions that compromise the efficacy of the treatment. Low literacy skills disempower patients in interactions with healthcare providers and preclude the use of existing written patient information materials, which are generally written at a high reading level. Visual images or pictograms used as a counselling tool or included in patient information leaflets have been shown to improve patients' knowledge, particularly in low-literate groups. The objective of this study was to design visuals or pictograms illustrating various ARV side effects and to evaluate them in a low-literate South African Xhosa population. Core images were generated either from a design workshop or from posed photos or images from textbooks. The research team worked closely with a graphic artist. Initial versions of the images were discussed and assessed in group discussions, and then modified and eventually evaluated quantitatively in individual interviews with 40 participants who each had a maximum of 10 years of schooling. The familiarity of the human body, its facial expressions, postures and actions contextualised the information and contributed to the participants' understanding. Visuals that were simple, had a clear central focus and reflected familiar body experiences (e.g. vomiting) were highly successful. The introduction of abstract elements (e.g. fever) and metaphorical images (e.g. nightmares) presented problems for interpretation, particularly to those with the lowest educational levels. We recommend that such visual images should be designed in collaboration with the target population and a graphic artist, taking cognisance of the audience's literacy skills and culture, and should employ a multistage iterative process of modification and evaluation.

  9. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  10. Terminal Information Processing System (TIPS) Consolidated CAB Display (CCD) Comparative Analysis.

    DTIC Science & Technology

    1982-04-01

    [Fragmentary DTIC abstract; only excerpts survive.] Displayed data items include: barometric pressure; center field wind speed, direction and gusts; runway visual range; low-level wind shear; vortex advisory; runway equipment... Standard-user commands include: PASSWORD, PAUSE, PMSG, PPD, PURGE...

  11. Preserved subliminal processing and impaired conscious access in schizophrenia

    PubMed Central

    Del Cul, Antoine; Dehaene, Stanislas; Leboyer, Marion

    2006-01-01

    Background Studies of visual backward masking have frequently revealed an elevated masking threshold in schizophrenia. This finding has frequently been interpreted as indicating a low-level visual deficit. However, more recent models suggest that masking may also involve late and higher-level integrative processes, while leaving intact early “bottom-up” visual processing. Objectives We tested the hypothesis that the backward masking deficit in schizophrenia corresponds to a deficit in the late stages of conscious perception, whereas the subliminal processing of masked stimuli is fully preserved. Method 28 patients with schizophrenia and 28 normal controls performed two backward-masking experiments. We used Arabic digits as stimuli and varied quasi-continuously the interval with a subsequent mask, thus allowing us to progressively “unmask” the stimuli. We finely quantified their degree of visibility using both objective and subjective measures to evaluate the threshold duration for access to consciousness. We also studied the priming effect caused by the variably masked numbers on a comparison task performed on a subsequently presented and highly visible target number. Results The threshold delay between digit and mask necessary for the conscious perception of the masked stimulus was longer in patients compared to control subjects. This higher consciousness threshold in patients was confirmed by an objective and a subjective measure, and both measures were highly correlated for patients as well as for controls. However, subliminal priming of masked numbers was effective and identical in patients compared to controls. Conclusions Access to conscious report of masked stimuli is impaired in schizophrenia, while fast bottom-up processing of the same stimuli, as assessed by subliminal priming, is preserved. These findings suggest a high-level origin of the masking deficit in schizophrenia, although they leave open for further research its exact relation to previously identified bottom-up visual processing abnormalities. PMID:17146006

  12. AUVA - Augmented Reality Empowers Visual Analytics to explore Medical Curriculum Data.

    PubMed

    Nifakos, Sokratis; Vaitsis, Christos; Zary, Nabil

    2015-01-01

    Medical curriculum data play a key role in the structure and organization of medical programs in universities around the world. The effective processing and use of these data may improve the educational environment of medical students; as a consequence, the new generation of health professionals would have better skills than the previous one. This study introduces the process of enhancing curriculum data through augmented reality technology as a management and presentation tool. The final goal is to enrich the information presented by a visual analytics approach applied to medical curriculum data while keeping these data easy to understand.

  13. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other focus has been the study of high-level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to the physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there seems to have been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. The understanding of these processes will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  14. Cognitive tunneling: use of visual information under stress.

    PubMed

    Dirkin, G R

    1983-02-01

    References to "tunnel vision" under stress are considered to describe a process of attentional, rather than visual, narrowing. The hypothesis of Easterbrook that the range of cue utilization is reduced under stress was tested with a primary task located in the visual periphery. High school volunteers performed a visual discrimination task with choice reaction time (RT) as the dependent variable. A 2 X 3 order of presentation by practice design, with repeated measures on the last factor, was employed. Two levels of stress, high and low, were operationalized by the subject's performing in the presence of an evaluative audience or alone. Pulse rate was employed as a manipulation check on arousal. The results partially supported the hypothesis that a peripherally visual primary task could be attended to under stress without decrement in performance.

  15. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  16. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  17. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
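
    A minimal multiresolution sketch of the kind of decomposition described above, using PyWavelets as an assumed modern stand-in (not the paper's original implementation): an image is split into an approximation band and oriented detail sub-bands, and the detail energy per level gives a crude edge/contrast measure.

        # Multiresolution decomposition sketch with PyWavelets on a synthetic image.
        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        image = rng.random((128, 128))                    # stand-in for a real image

        coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
        approx, details = coeffs[0], coeffs[1:]           # details[0] is the coarsest level

        for level, (cH, cV, cD) in enumerate(details, start=1):
            energy = sum(float(np.sum(c ** 2)) for c in (cH, cV, cD))
            print(f"level {level}: detail band shape {cH.shape}, detail energy {energy:.1f}")

        # Crude edge map at the finest scale from horizontal/vertical detail magnitudes.
        cH, cV, _ = details[-1]
        magnitude = np.abs(cH) + np.abs(cV)
        edges = magnitude > 0.5 * magnitude.max()
        print("edge pixels at finest scale:", int(edges.sum()))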

  19. Query2Question: Translating Visualization Interaction into Natural Language.

    PubMed

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that effect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.

  20. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research was to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determining factor for visual attention. During the experiment, 28 observers watched scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using content with low-level visual features. With univariate tests of significance (MANOVA), it was found that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
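
    Building empirical saliency maps from fixation data, as done above, typically amounts to accumulating fixation positions into a 2D histogram and smoothing with a Gaussian roughly matched to foveal extent. The sketch below uses made-up coordinates, screen size, and smoothing width, and compares two conditions with a simple map correlation.

        # Hypothetical sketch: fixation-density saliency maps and a between-condition comparison.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(3)
        width, height = 1920, 1080
        fixations = np.column_stack([
            rng.integers(0, width, 400),    # x coordinates (pixels)
            rng.integers(0, height, 400),   # y coordinates (pixels)
        ])

        counts = np.zeros((height, width))
        for x, y in fixations:
            counts[y, x] += 1

        saliency = gaussian_filter(counts, sigma=40)      # width ~1 deg of visual angle, illustrative
        saliency /= saliency.max()                        # normalise to [0, 1]

        # Compare two conditions (e.g., 2D vs 3D viewing) with a simple map correlation.
        saliency_b = gaussian_filter(np.roll(counts, 50, axis=1), sigma=40)
        saliency_b /= saliency_b.max()
        r = np.corrcoef(saliency.ravel(), saliency_b.ravel())[0, 1]
        print(f"map correlation between conditions: {r:.3f}")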

  1. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America
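
    In the spirit of the global-context idea above, a coarse holistic descriptor of the image can be used to predict object presence before any local search. The sketch below uses grid-averaged gradient energy as a stand-in "gist" descriptor and a logistic regression prior; the scenes, labels, and descriptor are synthetic assumptions, not the model in the paper.

        # Illustrative context-based priming: a global descriptor predicts object presence.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)

        def gist_descriptor(image, grid=4):
            """Very rough 'gist': mean gradient energy in each cell of a grid x grid tiling."""
            gy, gx = np.gradient(image.astype(float))
            energy = gx ** 2 + gy ** 2
            h, w = energy.shape
            cells = energy[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
            return cells.mean(axis=(1, 3)).ravel()

        # Synthetic "scenes": object-present images get extra texture low in the frame.
        def make_scene(present):
            img = rng.random((64, 64))
            if present:
                img[40:, :] += 0.5 * rng.random((24, 64))
            return img

        labels = rng.integers(0, 2, 300)
        X = np.array([gist_descriptor(make_scene(bool(p))) for p in labels])

        clf = LogisticRegression(max_iter=1000).fit(X[:200], labels[:200])
        print("held-out accuracy of the global-context prior:",
              round(clf.score(X[200:], labels[200:]), 2))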

  2. Visually Evoked 3-5 Hz Membrane Potential Oscillations Reduce the Responsiveness of Visual Cortex Neurons in Awake Behaving Mice.

    PubMed

    Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman

    2017-05-17

    Low-frequency membrane potential (Vm) oscillations were once thought to only occur in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations where the Vm baseline hyperpolarized as the Vm underwent high amplitude rhythmic fluctuations lasting 1-2 s in duration. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and animal locomotive speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and that they occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet, very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals. By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in high and low arousal states as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors.

  3. The Effects of Visual Complexity for Japanese Kanji Processing with High and Low Frequencies

    ERIC Educational Resources Information Center

    Tamaoka, Katsuo; Kiyama, Sachiko

    2013-01-01

    The present study investigated the effects of visual complexity for kanji processing by selecting target kanji from different stroke ranges of visually simple (2-6 strokes), medium (8-12 strokes), and complex (14-20 strokes) kanji with high and low frequencies. A kanji lexical decision task in Experiment 1 and a kanji naming task in Experiment 2…

  4. Recovery of a crowded object by masking the flankers: Determining the locus of feature integration

    PubMed Central

    Chakravarthi, Ramakrishna; Cavanagh, Patrick

    2009-01-01

    Object recognition is a central function of the visual system. As a first step, the features of an object are registered; these independently encoded features are then bound together to form a single representation. Here we investigate the locus of this “feature integration” by examining crowding, a striking breakdown of this process. Crowding, an inability to identify a peripheral target surrounded by flankers, results from “excessive integration” of target and flanker features. We presented a standard crowding display with a target C flanked by four flanker C's in the periphery. We then masked only the flankers (but not the target) with one of three kinds of masks—noise, metacontrast, and object substitution—each of which interferes at progressively higher levels of visual processing. With noise and metacontrast masks (low-level masking), the crowded target was recovered, whereas with object substitution masks (high-level masking), it was not. This places a clear upper bound on the locus of interference in crowding suggesting that crowding is not a low-level phenomenon. We conclude that feature integration, which underlies crowding, occurs prior to the locus of object substitution masking. Further, our results indicate that the integrity of the flankers, but not their identification, is crucial for crowding to occur. PMID:19810785

  5. Do Rats Use Shape to Solve "Shape Discriminations"?

    ERIC Educational Resources Information Center

    Minini, Loredana; Jeffery, Kathryn J.

    2006-01-01

    Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did…

  6. The contribution of visual processing to academic achievement in adolescents born extremely preterm or extremely low birth weight.

    PubMed

    Molloy, Carly S; Di Battista, Ashley M; Anderson, Vicki A; Burnett, Alice; Lee, Katherine J; Roberts, Gehan; Cheong, Jeanie Ly; Anderson, Peter J; Doyle, Lex W

    2017-04-01

    Children born extremely preterm (EP, <28 weeks) and/or extremely low birth weight (ELBW, <1000 g) have more academic deficiencies than their term-born peers, which may be due to problems with visual processing. The aim of this study is to determine (1) if visual processing is related to poor academic outcomes in EP/ELBW adolescents, and (2) how much of the variance in academic achievement in EP/ELBW adolescents is explained by visual processing ability after controlling for perinatal risk factors and other known contributors to academic performance, particularly attention and working memory. A geographically determined cohort of 228 surviving EP/ELBW adolescents (mean age 17 years) was studied. The relationships between measures of visual processing (visual acuity, binocular stereopsis, eye convergence, and visual perception) and academic achievement were explored within the EP/ELBW group. Analyses were repeated controlling for perinatal and social risk, and measures of attention and working memory. It was found that visual acuity, convergence and visual perception are related to scores for academic achievement on univariable regression analyses. After controlling for potential confounds (perinatal and social risk, working memory and attention), visual acuity, convergence and visual perception remained associated with reading and math computation, but only convergence and visual perception are related to spelling. The additional variance explained by visual processing is up to 6.6% for reading, 2.7% for spelling, and 2.2% for math computation. None of the visual processing variables or visual motor integration are associated with handwriting on multivariable analysis. Working memory is generally a stronger predictor of reading, spelling, and math computation than visual processing. It was concluded that visual processing difficulties are significantly related to academic outcomes in EP/ELBW adolescents; therefore, specific attention should be paid to academic remediation strategies incorporating the management of working memory and visual processing in EP/ELBW children.
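
    The "additional variance explained" reported above corresponds to the R-squared increment between nested regression models. Below is a hedged sketch with synthetic data (statsmodels assumed; the variable names are illustrative stand-ins for the study's measures).

        # R-squared increment for a nested regression on simulated data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 228
        working_memory = rng.normal(size=n)
        social_risk = rng.normal(size=n)
        visual_processing = 0.4 * working_memory + rng.normal(size=n)
        reading = (0.5 * working_memory + 0.25 * visual_processing
                   - 0.2 * social_risk + rng.normal(size=n))

        X_base = sm.add_constant(np.column_stack([working_memory, social_risk]))
        X_full = sm.add_constant(np.column_stack([working_memory, social_risk, visual_processing]))

        r2_base = sm.OLS(reading, X_base).fit().rsquared
        r2_full = sm.OLS(reading, X_full).fit().rsquared
        print(f"additional variance explained by visual processing: {100 * (r2_full - r2_base):.1f}%")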

  7. Modulation of microsaccades by spatial frequency during object categorization.

    PubMed

    Craddock, Matt; Oppermann, Frank; Müller, Matthias M; Martinovic, Jasna

    2017-01-01

    The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Position Information Encoded by Population Activity in Hierarchical Visual Areas

    PubMed Central

    Majima, Kei; Horikawa, Tomoyasu

    2017-01-01

    Abstract Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior. PMID:28451634
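
    A toy version of the maximum likelihood decoding described above: each voxel is given a 2D Gaussian receptive field model, and the decoded position is the candidate location whose predicted population response best matches the observed one (with i.i.d. Gaussian noise, maximizing the likelihood reduces to minimizing squared error). All receptive fields and responses below are simulated.

        # Toy maximum-likelihood position decoder from simulated population RF models.
        import numpy as np

        rng = np.random.default_rng(5)
        n_voxels = 120

        centers = rng.uniform(-10, 10, size=(n_voxels, 2))     # RF centres (deg)
        sizes = rng.uniform(1.5, 4.0, size=n_voxels)           # RF sigmas (deg); larger mimics higher areas

        def predicted_response(pos):
            d2 = np.sum((centers - pos) ** 2, axis=1)
            return np.exp(-d2 / (2 * sizes ** 2))

        true_pos = np.array([3.0, -2.0])
        observed = predicted_response(true_pos) + rng.normal(scale=0.1, size=n_voxels)

        # Grid search over candidate positions; least squares = maximum likelihood here.
        grid = np.linspace(-10, 10, 81)
        best, best_err = None, np.inf
        for x in grid:
            for y in grid:
                err = np.sum((observed - predicted_response(np.array([x, y]))) ** 2)
                if err < best_err:
                    best, best_err = (x, y), err

        print("true position:", true_pos, " decoded position:", best)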

  9. Visual White Matter Integrity in Schizophrenia

    PubMed Central

    Butler, Pamela D.; Hoptman, Matthew J.; Nierenberg, Jay; Foxe, John J.; Javitt, Daniel C.; Lim, Kelvin O.

    2007-01-01

    Objective Patients with schizophrenia have visual-processing deficits. This study examines visual white matter integrity as a potential mechanism for these deficits. Method Diffusion tensor imaging was used to examine white matter integrity at four levels of the visual system in 17 patients with schizophrenia and 21 comparison subjects. The levels examined were the optic radiations, the striate cortex, the inferior parietal lobule, and the fusiform gyrus. Results Schizophrenia patients showed a significant decrease in fractional anisotropy in the optic radiations but not in any other region. Conclusions This finding indicates that white matter integrity is more impaired at initial input, rather than at higher levels of the visual system, and supports the hypothesis that visual-processing deficits occur at the early stages of processing. PMID:17074957

  10. Does bimodal stimulus presentation increase ERP components usable in BCIs?

    NASA Astrophysics Data System (ADS)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.

    2012-08-01

    Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.

  11. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  12. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  13. Cognitive abilities predict death during the next 15 years in older Japanese adults.

    PubMed

    Nishita, Yukiko; Tange, Chikako; Tomida, Makiko; Otsuka, Rei; Ando, Fujiko; Shimokata, Hiroshi

    2017-10-01

    The longitudinal relationship between cognitive abilities and subsequent death was investigated among community-dwelling older Japanese adults. Participants (n = 1060; age range 60-79 years) comprised the first-wave participants of the National Institute for Longevity Sciences-Longitudinal Study of Aging. Participants' cognitive abilities were measured at baseline using the Japanese Wechsler Adult Intelligence Scale-Revised Short Form, which includes the following tests: Information (general knowledge), Similarities (logical abstract thinking), Picture Completion (visual perception and long-term visual memory) and Digit Symbol (information processing speed). By each cognitive test score, participants were classified into three groups: the high-level group (≥ the mean + 1SD), the low-level group (≤ the mean - 1SD) and the middle-level group. Data on death and moving during the subsequent 15 years were collected and analyzed using the multiple Cox proportional hazard model adjusted for physical and psychosocial covariates. During the follow-up period, 308 participants (29.06%) had died and 93 participants (8.77%) had moved. In the Similarities test, adjusted hazard ratios (HR) of the low-level group to the high-level group were significant (HR 1.49, 95% CI 1.02-2.17, P = 0.038). Furthermore, in the Digit symbol test, the adjusted HR of the low-level group to the high-level group was significant (HR 1.62, 95% CI 1.03-2.58, P = 0.038). Significant adjusted HR were not observed for the Information or Picture Completion tests. It is suggested that a lower level of logical abstract thinking and slower information processing speed are associated with shorter survival among older Japanese adults. Geriatr Gerontol Int 2017; 17: 1654-1660. © 2016 Japan Geriatrics Society.
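
    The survival analysis above can be sketched as follows: a cognitive score is split into low/middle/high groups at the mean plus or minus one standard deviation, and a covariate-adjusted Cox proportional hazards model yields hazard ratios. The data are simulated, and the lifelines package is an assumed tool rather than necessarily the study's software.

        # Sketch of a covariate-adjusted Cox model on simulated data (lifelines assumed).
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(7)
        n = 1060
        score = rng.normal(size=n)                         # e.g., an information-processing-speed score
        age = rng.uniform(60, 79, size=n)

        low = score <= score.mean() - score.std()          # low-level group (<= mean - 1 SD)
        high = score >= score.mean() + score.std()         # high-level group (>= mean + 1 SD)

        # Simulate survival times in which mortality rises with age and with low scores.
        hazard = 0.02 * np.exp(0.03 * (age - 60) + 0.5 * low)
        time_to_event = rng.exponential(1.0 / hazard)
        follow_up = 15.0                                   # years of follow-up
        df = pd.DataFrame({
            "duration": np.minimum(time_to_event, follow_up),
            "event": (time_to_event <= follow_up).astype(int),
            "low_score": low.astype(int),                  # middle group is the reference here
            "high_score": high.astype(int),                # (the study used the high group as reference)
            "age": age,
        })

        cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
        print(cph.summary[["exp(coef)", "p"]])             # hazard ratios and p-values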

  14. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
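
    The Bayesian machinery described above can be sketched with a toy model: candidate words are points in a feature space, the auditory and visual streams provide noisy observations of the spoken word, and the posterior multiplies the two likelihoods. The feature space, noise levels, and word set below are arbitrary assumptions, so the sketch illustrates the combination rule only and is not expected to reproduce the paper's specific noise-level effects.

        # Toy Bayesian audiovisual word recognition: multiply Gaussian likelihoods over words.
        import numpy as np

        rng = np.random.default_rng(8)
        n_words, dim = 100, 20
        words = rng.normal(size=(n_words, dim))          # word "prototypes" in feature space
        true_idx = 7

        def recognise(sigma_a, sigma_v):
            """Return 1 if the maximum-posterior word equals the spoken word."""
            x_a = words[true_idx] + rng.normal(scale=sigma_a, size=dim)   # auditory observation
            x_v = words[true_idx] + rng.normal(scale=sigma_v, size=dim)   # visual (lip-read) observation
            log_post = (-np.sum((x_a - words) ** 2, axis=1) / (2 * sigma_a ** 2)
                        - np.sum((x_v - words) ** 2, axis=1) / (2 * sigma_v ** 2))
            return int(np.argmax(log_post) == true_idx)

        # Audiovisual benefit = AV accuracy minus auditory-only accuracy across noise levels
        # (with these made-up parameters the pattern need not match the paper's finding).
        for sigma_a in (0.5, 1.5, 3.0, 6.0):
            av = np.mean([recognise(sigma_a, sigma_v=1.0) for _ in range(2000)])
            a_only = np.mean([recognise(sigma_a, sigma_v=1e6) for _ in range(2000)])
            print(f"auditory noise {sigma_a:>4}:  A-only {a_only:.2f}  AV {av:.2f}  benefit {av - a_only:+.2f}")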

  15. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  16. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    PubMed

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  17. A Different View on the Checkerboard? Alterations in Early and Late Visually Evoked EEG Potentials in Asperger Observers

    PubMed Central

    Kornmeier, Juergen; Wörner, Rike; Riedel, Andreas; Bach, Michael; Tebartz van Elst, Ludger

    2014-01-01

    Background Asperger Autism is a lifelong psychiatric condition with highly circumscribed interests and routines, problems in social cognition, verbal and nonverbal communication, and also perceptual abnormalities with sensory hypersensitivity. To objectify both lower-level visual and cognitive alterations we looked for differences in visual event-related potentials (EEG) between Asperger observers and matched controls while they observed simple checkerboard stimuli. Methods In a balanced oddball paradigm checkerboards of two checksizes (0.6° and 1.2°) were presented with different frequencies. Participants counted the occurrence times of the rare fine or rare coarse checkerboards in different experimental conditions. We focused on early visual ERP differences as a function of checkerboard size and the classical P3b ERP component as an indicator of cognitive processing. Results We found an early (100–200 ms after stimulus onset) occipital ERP effect of checkerboard size (dominant spatial frequency). This effect was weaker in the Asperger than in the control observers. Further a typical parietal/central oddball-P3b occurred at 500 ms with the rare checkerboards. The P3b showed a right-hemispheric lateralization, which was more prominent in Asperger than in control observers. Discussion The difference in the early occipital ERP effect between the two groups may be a physiological marker of differences in the processing of small visual details in Asperger observers compared to normal controls. The stronger lateralization of the P3b in Asperger observers may indicate a stronger involvement of the right-hemispheric network of bottom-up attention. The lateralization of the P3b signal might be a compensatory consequence of the compromised early checksize effect. Higher-level analytical information processing units may need to compensate for difficulties in low-level signal analysis. PMID:24632708

  18. A different view on the checkerboard? Alterations in early and late visually evoked EEG potentials in Asperger observers.

    PubMed

    Kornmeier, Juergen; Wörner, Rike; Riedel, Andreas; Bach, Michael; Tebartz van Elst, Ludger

    2014-01-01

    Asperger Autism is a lifelong psychiatric condition with highly circumscribed interests and routines, problems in social cognition, verbal and nonverbal communication, and also perceptual abnormalities with sensory hypersensitivity. To objectify both lower-level visual and cognitive alterations we looked for differences in visual event-related potentials (EEG) between Asperger observers and matched controls while they observed simple checkerboard stimuli. In a balanced oddball paradigm checkerboards of two checksizes (0.6° and 1.2°) were presented with different frequencies. Participants counted the occurrences of the rare fine or rare coarse checkerboards in different experimental conditions. We focused on early visual ERP differences as a function of checkerboard size and the classical P3b ERP component as an indicator of cognitive processing. We found an early (100-200 ms after stimulus onset) occipital ERP effect of checkerboard size (dominant spatial frequency). This effect was weaker in the Asperger than in the control observers. Further, a typical parietal/central oddball-P3b occurred at 500 ms with the rare checkerboards. The P3b showed a right-hemispheric lateralization, which was more prominent in Asperger than in control observers. The difference in the early occipital ERP effect between the two groups may be a physiological marker of differences in the processing of small visual details in Asperger observers compared to normal controls. The stronger lateralization of the P3b in Asperger observers may indicate a stronger involvement of the right-hemispheric network of bottom-up attention. The lateralization of the P3b signal might be a compensatory consequence of the compromised early checksize effect. Higher-level analytical information processing units may need to compensate for difficulties in low-level signal analysis.

  19. Biomarkers of a five-domain translational substrate for schizophrenia and schizoaffective psychosis.

    PubMed

    Fryar-Williams, Stephanie; Strobel, Jörg E

    2015-01-01

    The Mental Health Biomarker Project (2010-2014) selected commercial biochemistry markers related to monoamine synthesis and metabolism and measures of visual and auditory processing performance. Within a case-control discovery design with exclusion criteria designed to produce a highly characterised sample, results from 67 independently DSM IV-R-diagnosed cases of schizophrenia and schizoaffective disorder were compared with those from 67 control participants selected from a local hospital, clinic and community catchment area. Participants underwent protocol-based diagnostic-checking, functional-rating, biological sample-collection for thirty candidate markers and sensory-processing assessment. Fifteen biomarkers were identified on ROC analysis. Using these biomarkers, odds ratios, adjusted for a case-control design, indicated that schizophrenia and schizoaffective disorder were highly associated with dichotic listening disorder, delayed visual processing, low visual span, delayed auditory speed of processing, low reverse digit span as a measure of auditory working memory, and elevated levels of catecholamines. Other nutritional and biochemical biomarkers were also identified: elevated hydroxyl pyrroline-2-one as a marker of oxidative stress; deficits in vitamin D, B6 and folate; and elevations of serum B12 and the free serum copper-to-zinc ratio. When individual biomarkers were ranked by odds ratio and correlated with clinical severity, five functional domains were formed: visual processing, auditory processing, oxidative stress, catecholamines, and nutritional-biochemical variables. When the strengths of their inter-domain relationships were predicted by Lowess (non-parametric) regression, predominant bidirectional relationships were found between the visual processing and catecholamine domains. At a cellular level, the nutritional-biochemical domain exerted a pervasive influence on the auditory domain as well as on all other domains. The findings of this biomarker research point towards a much-required advance in psychiatry: quantification of some theoretically understandable, translationally informative, treatment-relevant underpinnings of serious mental illness. This evidence reveals schizophrenia and schizoaffective disorder in a somewhat different manner, as a conglomerate of several disorders, many of which are not currently assessed for or treated in clinical settings. Currently available remediation techniques for these underlying conditions have the potential to reduce treatment resistance, improve relapse prevention, and lessen cost burden and social stigma in these conditions. If replicated and validated in prospective trials, such findings will improve progress-monitoring and treatment-response for schizophrenia and schizoaffective disorder.
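
    A minimal sketch of the two analysis steps named above (ROC-based screening of candidate markers and Lowess regression between domain scores); the data frame, column names, and values are simulated placeholders, not the project's data:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "case": np.r_[np.ones(67, dtype=int), np.zeros(67, dtype=int)],  # 67 cases, 67 controls
        "visual_processing_delay": rng.normal(0, 1, 134),
        "catecholamines": rng.normal(0, 1, 134),
    })

    # Screen candidate markers by area under the ROC curve (case vs. control discrimination).
    for marker in ["visual_processing_delay", "catecholamines"]:
        print(marker, roc_auc_score(df["case"], df[marker]))

    # Non-parametric (Lowess) regression between two domain scores,
    # analogous to probing inter-domain relationships.
    fit = lowess(df["catecholamines"], df["visual_processing_delay"], frac=0.5)
    # fit[:, 0] holds sorted predictor values, fit[:, 1] the smoothed estimates.
    ```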

  20. The role of effort in moderating the anxiety-performance relationship: Testing the prediction of processing efficiency theory in simulated rally driving.

    PubMed

    Wilson, Mark; Smith, Nickolas C; Chattington, Mark; Ford, Mike; Marple-Horvat, Dilwyn E

    2006-11-01

    We tested some of the key predictions of processing efficiency theory using a simulated rally driving task. Two groups of participants were classified as either dispositionally high or low anxious based on trait anxiety scores and trained on a simulated driving task. Participants then raced individually on two similar courses under counterbalanced experimental conditions designed to manipulate the level of anxiety experienced. The effort exerted on the driving tasks was assessed through self-report (RSME), psychophysiological measures (pupil dilation) and visual gaze data. Efficiency was measured in terms of efficiency of visual processing (search rate) and driving control (variability of wheel and accelerator pedal) indices. Driving performance was measured as the time taken to complete the course. As predicted, increased anxiety had a negative effect on processing efficiency as indexed by the self-report, pupillary response and variability of gaze data. Predicted differences due to dispositional levels of anxiety were also found in the driving control and effort data. Although both groups of drivers performed worse under the threatening condition, the performance of the high trait anxious individuals was affected to a greater extent by the anxiety manipulation than the performance of the low trait anxious drivers. The findings suggest that processing efficiency theory holds promise as a theoretical framework for examining the relationship between anxiety and performance in sport.

  1. Low Vision Rehabilitation for Adult African Americans in Two Settings.

    PubMed

    Draper, Erin M; Feng, Rui; Appel, Sarah D; Graboyes, Marcy; Engle, Erin; Ciner, Elise B; Ellenberg, Jonas H; Stambolian, Dwight

    2016-07-01

    The Vision Rehabilitation for African Americans with Central Vision Impairment (VISRAC) study is a demonstration project evaluating how modifications in vision rehabilitation can improve the use of functional vision. Fifty-five African Americans 40 years of age and older with central vision impairment were randomly assigned to receive either clinic-based (CB) or home-based (HB) low vision rehabilitation services. Forty-eight subjects completed the study. The primary outcome was the change in functional vision in activities of daily living, as assessed with the Veteran's Administration Low-Vision Visual Function Questionnaire (VFQ-48). This included scores for overall visual ability and visual ability domains (reading, mobility, visual information processing, and visual motor skills). Each score was normalized into logit estimates by Rasch analysis. Linear regression models were used to compare the difference in the total score and each domain score between the two intervention groups. The significance level for each comparison was set at 0.05. Both CB and HB groups showed significant improvement in overall visual ability at the final visit compared with baseline. The CB group showed greater improvement than the HB group (mean of 1.28 vs. 0.87 logits change), though the group difference was not significant (p = 0.057). The CB group visual motor skills score showed significant improvement over the HB group score (mean of 3.30 vs. 1.34 logits change, p = 0.044). The between-group differences in improvement of the reading and visual information processing scores were not significant (p = 0.054 and p = 0.509, respectively). Neither group had significant improvement in the mobility score, which was not part of the rehabilitation program. Vision rehabilitation is effective for this study population regardless of location. Possible reasons why the CB group performed better than the HB group include a number of psychosocial factors as well as the more standardized, distraction-free work environment within the clinic setting.

  2. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  3. Salience and Attention in Surprisal-Based Accounts of Language Processing.

    PubMed

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.
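
    Since surprisal is the information-theoretic quantity these accounts build on, a one-line definition may help orient the reader; the probability value in this sketch is a made-up illustration, not an estimate from any model discussed above:

    ```python
    import math

    def surprisal(p_word_given_context: float) -> float:
        """Surprisal in bits: -log2 P(word | context)."""
        return -math.log2(p_word_given_context)

    print(surprisal(0.25))  # a word with conditional probability 0.25 carries 2 bits of surprisal
    ```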

  4. Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function

    PubMed Central

    Jiménez Barbosa, Ingrid Astrid

    2015-01-01

    Purpose: To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods: The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass pattern (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results: Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners, indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than non dry-cleaners, particularly along the blue/yellow axis. In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search was not different between dry cleaners and non dry-cleaners. Conclusions: Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low level deficits (such as the perception of contrast and discrimination of colour) and high level visual deficits such as the perception of global form and motion, but not visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
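
    A minimal sketch of how search slopes (ms per item) distinguish parallel from serial search in a task like Experiment 2; the reaction-time values below are illustrative assumptions, not data from the study:

    ```python
    import numpy as np

    set_sizes = np.array([4, 8, 16, 32, 64, 100])
    rt_parallel = np.array([520, 525, 530, 528, 535, 540])   # ms; roughly flat across set size
    rt_serial = np.array([540, 610, 750, 1020, 1580, 2200])  # ms; grows with set size

    slope_par, _ = np.polyfit(set_sizes, rt_parallel, 1)
    slope_ser, _ = np.polyfit(set_sizes, rt_serial, 1)
    print(f"parallel search slope: {slope_par:.1f} ms/item")
    print(f"serial search slope:   {slope_ser:.1f} ms/item")
    ```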

  5. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  6. Top-down preparation modulates visual categorization but not subjective awareness of objects presented in natural backgrounds.

    PubMed

    Koivisto, Mika; Kahila, Ella

    2017-04-01

    Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to separate directly the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation exerts its influence at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Visual Modelling of Learning Processes

    ERIC Educational Resources Information Center

    Copperman, Elana; Beeri, Catriel; Ben-Zvi, Nava

    2007-01-01

    This paper introduces various visual models for the analysis and description of learning processes. The models analyse learning on two levels: the dynamic level (as a process over time) and the functional level. Two types of model for dynamic modelling are proposed: the session trace, which documents a specific learner in a particular learning…

  8. Visual noise disrupts conceptual integration in reading.

    PubMed

    Gao, Xuefei; Stine-Morrow, Elizabeth A L; Noh, Soo Rim; Eskew, Rhea T

    2011-02-01

    The Effortfulness Hypothesis suggests that sensory impairment (either simulated or age-related) may decrease capacity for semantic integration in language comprehension. We directly tested this hypothesis by measuring resource allocation to different levels of processing during reading (i.e., word vs. semantic analysis). College students read three sets of passages word-by-word, one at each of three levels of dynamic visual noise. There was a reliable interaction between processing level and noise, such that visual noise increased resources allocated to word-level processing, at the cost of attention paid to semantic analysis. Recall of the most important ideas also decreased with increasing visual noise. Results suggest that sensory challenge can impair higher-level cognitive functions in learning from text, supporting the Effortfulness Hypothesis.

  9. Skilled Deaf Readers have an Enhanced Perceptual Span in Reading

    PubMed Central

    Bélanger, Nathalie N.; Slattery, Timothy J.; Mayberry, Rachel I.; Rayner, Keith

    2013-01-01

    Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. PMID:22683830

  10. Cognitive and artificial representations in handwriting recognition

    NASA Astrophysics Data System (ADS)

    Lenaghan, Andrew P.; Malyan, Ron

    1996-03-01

    Both cognitive processes and artificial recognition systems may be characterized by the forms of representation they build and manipulate. This paper looks at how handwriting is represented in current recognition systems and the psychological evidence for its representation in the cognitive processes responsible for reading. Empirical psychological work on feature extraction in early visual processing is surveyed to show that a sound psychological basis for feature extraction exists and to describe the features this approach leads to. We report the first stage in the development of an architecture for a handwriting recognition system that has been strongly influenced by the psychological evidence for the cognitive processes and representations used in early visual processing. This architecture builds a number of parallel low-level feature maps from raw data. These feature maps are thresholded, and a region labeling algorithm is used to generate sets of features. Fuzzy logic is used to quantify the uncertainty in the presence of individual features.
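
    A minimal sketch of the pipeline just described (parallel low-level feature maps, thresholding, region labeling, and a hook for fuzzy membership); the Sobel map and all values are assumptions standing in for whichever feature detectors the system actually uses:

    ```python
    import numpy as np
    from scipy import ndimage

    image = np.random.rand(64, 64)                # placeholder for a handwriting image
    feature_map = ndimage.sobel(image, axis=0)    # one of several parallel low-level feature maps
    binary = np.abs(feature_map) > 0.5            # threshold the feature map

    labels, n_regions = ndimage.label(binary)     # region labeling yields candidate features
    sizes = ndimage.sum(binary, labels, index=range(1, n_regions + 1))
    # Per-region properties such as 'sizes' could then feed a fuzzy membership function
    # expressing uncertainty about whether each region is a genuine feature.
    print(n_regions, "candidate feature regions")
    ```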

  11. The influence of action observation on action execution: Dissociating the contribution of action on perception, perception on action, and resolving conflict.

    PubMed

    Deschrijver, Eliane; Wiersema, Jan R; Brass, Marcel

    2017-04-01

    For more than 15 years, motor interference paradigms have been used to investigate the influence of action observation on action execution. Most research on so-called automatic imitation has focused on variables that play a modulating role or investigated potential confounding factors. Interestingly, furthermore, a number of functional magnetic resonance imaging (fMRI) studies have tried to shed light on the functional mechanisms and neural correlates involved in imitation inhibition. However, these fMRI studies, presumably due to poor temporal resolution, have primarily focused on high-level processes and have neglected the potential role of low-level motor and perceptual processes. In the current EEG study, we therefore aimed to disentangle the influence of low-level perceptual and motoric mechanisms from high-level cognitive mechanisms. We focused on potential congruency differences in the visual N190 - a component related to the processing of biological motion, the Readiness Potential - a component related to motor preparation, and the high-level P3 component. Interestingly, we detected congruency effects in each of these components, suggesting that the interference effect in an automatic imitation paradigm is not only related to high-level processes such as self-other distinction but also to more low-level influences of perception on action and action on perception. Moreover, we documented relationships of the neural effects with (autistic) behavior.

  12. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing √7 times larger than that of the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
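
    The level-to-level bookkeeping implied above (seven basis functions per hexagonal neighborhood, lattice spacing growing by √7, coefficient count dropping by a factor of seven) can be made concrete with a small sketch; the starting lattice size is an arbitrary assumption:

    ```python
    def hop_pyramid_sizes(n_samples: int, n_levels: int):
        """Number of low-pass coefficients feeding each successive pyramid level."""
        sizes = [n_samples]
        for _ in range(n_levels):
            sizes.append(sizes[-1] // 7)  # spacing grows by sqrt(7); coefficient count drops by 7
        return sizes

    print(hop_pyramid_sizes(7 ** 4, 4))   # [2401, 343, 49, 7, 1]
    ```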

  13. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.

  14. Serendipity in Paperdoll Animation: Low Budget Methods for a Children's Workshop.

    ERIC Educational Resources Information Center

    Edwards, Emily D.

    Part of a larger project to design a production curriculum and measure the impact of this production activity on children's writing, visual thinking, and problem solving skills, a project developed an effective but inexpensive video for use in teaching animation processes to students at the elementary school level. The project used "cutout" or…

  15. Attentional selection of relative SF mediates global versus local processing: evidence from EEG.

    PubMed

    Flevaris, Anastasia V; Bentin, Shlomo; Robertson, Lynn C

    2011-06-13

    Previous research on functional hemispheric differences in visual processing has associated global perception with low spatial frequency (LSF) processing biases of the right hemisphere (RH) and local perception with high spatial frequency (HSF) processing biases of the left hemisphere (LH). The Double Filtering by Frequency (DFF) theory expanded this hypothesis by proposing that visual attention selects and is directed to relatively LSFs by the RH and relatively HSFs by the LH, suggesting a direct causal relationship between SF selection and global versus local perception. We tested this idea in the current experiment by comparing activity in the EEG recorded at posterior right and posterior left hemisphere sites while participants' attention was directed to global or local levels of processing after selection of relatively LSFs versus HSFs in a previous stimulus. Hemispheric asymmetry in the alpha band (8-12 Hz) during preparation for global versus local processing was modulated by the selected SF. In contrast, preparatory activity associated with selection of SF was not modulated by the previously attended level (global/local). These results support the DFF theory that top-down attentional selection of SF mediates global and local processing.
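
    As an aside on the measure itself, a minimal sketch of estimating alpha-band (8-12 Hz) power from a single channel, the kind of quantity compared across posterior left and right sites above; the signal, sampling rate, and electrode are simulated assumptions:

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 250                                   # sampling rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(4).normal(size=t.size)

    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
    print(f"mean alpha-band power: {alpha_power:.3f}")
    ```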

  16. Consciousness as a graded and an all-or-none phenomenon: A conceptual analysis.

    PubMed

    Windey, Bert; Cleeremans, Axel

    2015-09-01

    The issue of whether consciousness is a graded or an all-or-none phenomenon has been, and continues to be, debated. Both contradictory accounts are supported by solid evidence. Starting from a level-of-processing framework allowing for states of partial awareness, here we further elaborate our view that visual experience, as it is most often investigated in the literature, is both graded and all-or-none. Low-level visual experience is graded, whereas high-level visual experience is all-or-none. We then present a conceptual analysis starting from the notion that consciousness is a general concept. We specify a number of different subconcepts present in the literature on consciousness, and outline how each of them may be seen as either graded, all-or-none, or both. We argue that such specifications are necessary to lead to a detailed and integrated understanding of how consciousness should be conceived of as graded and all-or-none. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding as an interpretation of visual information in terms of such knowledge models. The human brain's ability to emulate knowledge structures in the form of network-symbolic models has been identified, which marks an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes, such as clustering, perceptual grouping, and separation of figure from ground, are special kinds of graph/network transformations. They convert low-level image structure into sets of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  18. No psychological effect of color context in a low level vision task

    PubMed Central

    Pedley, Adam; Wade, Alex R

    2013-01-01

    Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280

  19. No psychological effect of color context in a low level vision task.

    PubMed

    Pedley, Adam; Wade, Alex R

    2013-01-01

    A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas.

  20. The relationship between eating-related individual differences and visual attention to foods high in added fat and sugar.

    PubMed

    Gearhardt, Ashley N; Treat, Teresa A; Hollingworth, Andrew; Corbin, William R

    2012-12-01

    Attentional biases for food-related stimuli may be associated separately with obesity, disordered eating, and hunger. We tested an integrative model that simultaneously examines the association of body mass index (BMI), disordered eating and hunger with food-related visual attention to processed foods that differ in added fat/sugar level (e.g., sweets, candies, fried foods) relative to minimally processed foods (e.g., fruits, meats/nuts, vegetables) that are lower in fat/sugar content. One-hundred overweight or obese women, ages 18-50, completed a food-related visual search task and measures associated with eating behavior. Height and weight were measured. Higher levels of hunger significantly predicted increased vigilance for sweets and candy and increased vigilance for fried foods at a trend level. Elevated hunger was associated significantly with decreased dwell time on fried foods and, at a trend level, with decreased dwell time on sweets. Higher BMIs emerged as a significant predictor of decreased vigilance for fried foods, but BMI was not related to dwell time. Disordered eating was unrelated to vigilance for or dwell time on unhealthy food types. This pattern of findings suggests that low-level attentional biases may contribute to difficulties making healthier food choices in the current food environment and may point toward useful strategies to reduce excess food consumption. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Does functional vision behave differently in low-vision patients with diabetic retinopathy?--A case-matched study.

    PubMed

    Ahmadian, Lohrasb; Massof, Robert

    2008-09-01

    A retrospective case-matched study was designed to compare patients with diabetic retinopathy (DR) and patients with other ocular diseases, managed in a low-vision clinic, in four different types of functional vision. Reading, mobility, visual motor, and visual information processing were measured in the patients (n = 114) and compared with those in patients with other ocular diseases (n = 114) matched in sex, visual acuity (VA), general health status, and age, using the Activity Inventory as a Rasch-scaled measurement tool. Binocular distance visual acuity was categorized as normal (20/12.5-20/25), near normal (20/32-20/63), moderate (20/80-20/160), severe (20/200-20/400), profound (20/500-20/1000), and total blindness (20/1250 to no light perception). Both the Wilcoxon matched-pairs signed-rank test and the sign test for matched pairs were used to compare estimated functional vision measures between DR cases and controls. Cases ranged in age from 19 to 90 years (mean age, 67.5), and 59% were women. The mean visual acuity (logMAR scale) was 0.7. Based on the Wilcoxon signed-rank test analyses and after adjusting the probability for multiple comparisons, there was no statistically significant difference (P > 0.05) between patients with DR and control subjects in any of the four types of functional vision. Furthermore, diabetic retinopathy patients did not differ (P > 0.05) from their matched counterparts in goal-level vision-related functional ability and total visual ability. Visual impairment in patients with DR appears to be a generic and non-disease-specific outcome that can be explained mainly by the end impact of the disease on the patients' daily lives and not by the unique disease process that results in the visual impairment.
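
    A minimal sketch of the matched-pairs comparisons named above (Wilcoxon signed-rank test plus a sign test); the scores are simulated placeholders on a logit scale, not the study's data, and the sign test is implemented here with an exact binomial test:

    ```python
    import numpy as np
    from scipy.stats import wilcoxon, binomtest

    rng = np.random.default_rng(2)
    dr_scores = rng.normal(0.0, 1.0, 114)       # DR cases (logit-scaled functional vision)
    control_scores = rng.normal(0.1, 1.0, 114)  # matched controls

    diff = dr_scores - control_scores
    stat, p_wilcoxon = wilcoxon(dr_scores, control_scores)
    p_sign = binomtest(int(np.sum(diff > 0)), n=int(np.sum(diff != 0))).pvalue
    print(f"Wilcoxon signed-rank p = {p_wilcoxon:.3f}, sign test p = {p_sign:.3f}")
    ```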

  2. The nature-disorder paradox: A perceptual study on how nature is disorderly yet aesthetically preferred.

    PubMed

    Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G

    2017-08-01

    Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading

    PubMed Central

    O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.

    2017-01-01

    Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
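
    A minimal sketch of the kind of linear encoding ("system identification") model this framework relies on: stimulus features are regressed onto EEG, and prediction accuracy on held-out data is the measure of interest. The features and EEG below are simulated, and real analyses use time-lagged regressors (temporal response functions) rather than the instantaneous ones shown here:

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n = 1000
    envelope = rng.normal(size=(n, 1))   # E: unheard acoustic envelope
    motion = rng.normal(size=(n, 1))     # M: lip-motion signal
    eeg = 0.5 * envelope[:, 0] + 0.3 * motion[:, 0] + rng.normal(size=n)

    X = np.hstack([envelope, motion])    # the combined "EM" feature set
    train, test = slice(0, 800), slice(800, None)

    model = Ridge(alpha=1.0).fit(X[train], eeg[train])
    r, _ = pearsonr(model.predict(X[test]), eeg[test])
    print(f"held-out prediction accuracy r = {r:.2f}")
    ```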

  4. Visuocortical Changes During Delay and Trace Aversive Conditioning: Evidence From Steady-State Visual Evoked Potentials

    PubMed Central

    Miskovic, Vladimir; Keil, Andreas

    2015-01-01

    The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582

  5. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera which is mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, and meanshift tracking and low level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% at fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
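
    A minimal sketch of the HOG-plus-linear-classifier idea behind such detectors; the window size, training crops, and labels are placeholder assumptions, and the meanshift tracking stage is omitted:

    ```python
    import cv2
    import numpy as np
    from sklearn.svm import LinearSVC

    hog = cv2.HOGDescriptor()            # default 64x128 detection window

    def describe(window_gray: np.ndarray) -> np.ndarray:
        """HOG feature vector for one 128x64 grayscale window."""
        return hog.compute(window_gray).ravel()

    # Placeholder training data: random windows standing in for labeled crops.
    rng = np.random.default_rng(0)
    windows = rng.integers(0, 256, size=(20, 128, 64), dtype=np.uint8)
    labels = np.array([1] * 10 + [0] * 10)   # 1 = trash can, 0 = background

    X = np.stack([describe(w) for w in windows])
    clf = LinearSVC().fit(X, labels)
    # At run time the same descriptor is computed for sliding windows of each frame,
    # and the classifier score decides whether a can is present; tracking is handled separately.
    ```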

  6. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2) or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and at test, and suggest that matching processing style between study and recognition has no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  7. No Pixel Left Behind - Peeling Away NASA's Satellite Swaths

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Boller, R. A.; Schmaltz, J. E.; Roberts, J. T.; Alarcon, C.; Huang, T.; McGann, M.; Murphy, K. J.

    2014-12-01

    Discovery and identification of Earth Science products should not be the majority effort of scientific research. Search aids based on text metadata go to great lengths to simplify this process. However, the process is still cumbersome and requires too much data download and analysis to down select to valid products. The EOSDIS Global Imagery Browse Services (GIBS) is attempting to improve this process by providing "visual metadata" in the form of full-resolution visualizations representing geophysical parameters taken directly from the data. Through the use of accompanying interpretive information such as color legends and the natural visual processing of the human eye, researchers are able to search and filter through data products in a more natural and efficient way. The GIBS "visual metadata" products are generated as representations of Level 3 data or as temporal composites of the Level 2 granule- or swath-based data products projected across a geographic or polar region. Such an approach allows for low-latency tiled access to pre-generated imagery products. For many GIBS users, the resulting image suffices for a basic representation of the underlying data. However, composite imagery presents an insurmountable problem: for areas of spatial overlap within the composite, only one observation is visually represented. This is especially problematic in the polar regions where a significant portion of sensed data is "lost." In response to its user community, the GIBS team coordinated with its stakeholders to begin developing an approach to ensure that there is "no pixel left behind." In this presentation we will discuss the use cases and requirements guiding our efforts, considerations regarding standards compliance and interoperability, and near term goals. We will also discuss opportunities to actively engage with the GIBS team on this topic to continually improve our services.

  8. The visual divide

    NASA Astrophysics Data System (ADS)

    Maes, Alfons

    2017-04-01

    Climate change is a playground for visualization. Yet research and technological innovations in visual communication and data visualization do not account for a substantial part of the world's population: vulnerable audiences with low levels of literacy.

  9. The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks.

    PubMed

    Bankson, B B; Hebart, M N; Groen, I I A; Baker, C I

    2018-05-17

    Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contribution of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We find that 1) both generalization across exemplars and generalization of object-related signals across time increase after 150 ms, peaking around 230 ms; 2) representations specific to behavioral judgments emerged rapidly, peaking around 160 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset. Copyright © 2018 Elsevier Inc. All rights reserved.
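
    A minimal sketch of the time-generalization logic used to ask how object-related signals generalize across time: a classifier trained at one time point is tested at all others. The MEG data, trial counts, and labels below are simulated assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n_trials, n_sensors, n_times = 100, 30, 20
    X = rng.normal(size=(n_trials, n_sensors, n_times))  # simulated MEG patterns
    y = rng.integers(0, 2, n_trials)                     # two object concepts

    tr, te = slice(0, 80), slice(80, None)
    gen = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = LogisticRegression(max_iter=1000).fit(X[tr, :, t_train], y[tr])
        for t_test in range(n_times):
            gen[t_train, t_test] = clf.score(X[te, :, t_test], y[te])
    # gen[i, j] is decoding accuracy when training at time i and testing at time j.
    ```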

  10. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
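
    A minimal sketch of the bag-of-visual-words representation this approach builds on; only SIFT is shown (SURF lives in OpenCV's non-free contrib module), and the vocabulary size, clustering settings, and the integration of the two vocabularies are assumptions rather than the paper's exact procedure:

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def sift_descriptors(gray: np.ndarray) -> np.ndarray:
        """Local SIFT descriptors for one grayscale image (empty array if none found)."""
        _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    def build_vocabulary(descriptor_sets, n_words=100):
        """Cluster pooled descriptors into a visual vocabulary."""
        return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(np.vstack(descriptor_sets))

    def bow_histogram(desc: np.ndarray, vocab: KMeans) -> np.ndarray:
        """Normalized visual-word histogram representing one image."""
        words = vocab.predict(desc)
        hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
        return hist / max(hist.sum(), 1)
    ```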

  11. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients.

    PubMed

    Rouger, Julien; Lagleyre, Sébastien; Démonet, Jean-François; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2012-08-01

    Psychophysical and neuroimaging studies in both animal and human subjects have clearly demonstrated that cortical plasticity following sensory deprivation leads to a brain functional reorganization that favors the spared modalities. In postlingually deaf patients, the use of a cochlear implant (CI) allows a recovery of the auditory function, which will probably counteract the cortical crossmodal reorganization induced by hearing loss. To study the dynamics of such reversed crossmodal plasticity, we designed a longitudinal neuroimaging study involving the follow-up of 10 postlingually deaf adult CI users engaged in a visual speechreading task. While speechreading activates Broca's area in normally hearing subjects (NHS), the activity level elicited in this region in CI patients is abnormally low and increases progressively with post-implantation time. Furthermore, speechreading in CI patients induces abnormal crossmodal activations in right anterior regions of the superior temporal cortex normally devoted to processing human voice stimuli (temporal voice-sensitive areas-TVA). These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in NHS. First, our study revealed that the neuroplasticity after cochlear implantation involves not only auditory but also visual and audiovisual speech processing networks. Second, our results suggest that during deafness, the functional links between cortical regions specialized in face and voice processing are reallocated to support speech-related visual processing through cross-modal reorganization. Such reorganization allows a more efficient audiovisual integration of speech after cochlear implantation. These compensatory sensory strategies are later completed by the progressive restoration of the visuo-audio-motor speech processing loop, including Broca's area. Copyright © 2011 Wiley Periodicals, Inc.

  12. Effect of physical workload and modality of information presentation on pattern recognition and navigation task performance by high-fit young males.

    PubMed

    Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David

    2017-11-01

    Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high intensity. Navigation accuracy was lower under high-level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion while haptic cueing degrades performance.

  13. Night vision in barn owls: visual acuity and contrast sensitivity under dark adaptation.

    PubMed

    Orlowski, Julius; Harmening, Wolf; Wagner, Hermann

    2012-12-06

    Barn owls are effective nocturnal predators. We tested their visual performance at low light levels and determined visual acuity and contrast sensitivity of three barn owls by their behavior at stimulus luminances ranging from photopic to fully scotopic levels (23.5 to 1.5 × 10⁻⁶ cd/m²). Contrast sensitivity and visual acuity decreased only slightly from photopic to scotopic conditions. Peak grating acuity was at mesopic (4 × 10⁻² cd/m²) conditions. Barn owls retained a quarter of their maximal acuity when luminance decreased by 5.5 log units. We argue that the visual system of barn owls is designed to yield as much visual acuity under low light conditions as possible, thereby sacrificing resolution at photopic conditions.

  14. Gestalten of today: early processing of visual contours and surfaces.

    PubMed

    Kovács, I

    1996-12-01

    While much is known about the specialized, parallel processing streams of low-level vision that extract primary visual cues, there is only limited knowledge about the dynamic interactions between them. How are the fragments, caught by local analyzers, assembled together to provide us with a unified percept? How are local discontinuities in texture, motion or depth evaluated with respect to object boundaries and surface properties? These questions are presented within the framework of orientation-specific spatial interactions of early vision. Key observations of psychophysics, anatomy and neurophysiology on interactions of various spatial and temporal ranges are reviewed. Aspects of the functional architecture and possible neural substrates of local orientation-specific interactions are discussed, underlining their role in the integration of information across the visual field, and particularly in contour integration. Examples are provided demonstrating that global context, such as contour closure and figure-ground assignment, affects these local interactions. It is illustrated that figure-ground assignment is realized early in visual processing, and that the pattern of early interactions also brings about an effective and sparse coding of visual shape. Finally, it is concluded that the underlying functional architecture is not only dynamic and context dependent, but the pattern of connectivity depends as much on past experience as on actual stimulation.

  15. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

    Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80 ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265 ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. Copyright © 2016 Elsevier Inc. All rights reserved.
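
    For readers unfamiliar with the time-resolved decoding approach mentioned here, the following is a minimal sketch of the idea: a cross-validated linear classifier is trained on the sensor pattern at each time point, and the time course of decoding accuracy indicates when object-identity information becomes available. The array shapes, labels, and classifier are illustrative assumptions; the sketch does not reproduce the study's actual MEG pipeline or its Granger causality step.

      # Time-resolved decoding sketch: fit a cross-validated linear classifier at
      # each time point of epoched MEG data (n_trials x n_sensors x n_times).
      # The synthetic data and parameters below are illustrative only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 160, 120))  # 200 trials, 160 sensors, 120 time samples
      y = rng.integers(0, 2, 200)               # two object identities

      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                           for t in range(X.shape[2])])

      # The latency at which accuracy first exceeds chance (0.5) plays the role of
      # the ~80 ms (peri-occipital) and ~265 ms (peri-frontal) onsets in the study.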

  16. Retrieval and Monitoring Processes during Visual Working Memory: An ERP Study of the Benefit of Visual Semantics

    PubMed Central

    Orme, Elizabeth; Brown, Louise A.; Riby, Leigh M.

    2017-01-01

    In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400–800 ms) and late posterior negativity (LPN; 500–900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively ‘pure’ and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively. PMID:28725203

  17. Retrieval and Monitoring Processes during Visual Working Memory: An ERP Study of the Benefit of Visual Semantics.

    PubMed

    Orme, Elizabeth; Brown, Louise A; Riby, Leigh M

    2017-01-01

    In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400-800 ms) and late posterior negativity (LPN; 500-900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively 'pure' and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively.

  18. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility being used at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  19. Active confocal imaging for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli

    2014-01-01

    There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only the in-focus plane and the objects within it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low-resolution, limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light-field imaging. PMID:25448710
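
    As a rough illustration of the de-cluttering idea (not the authors' light-field pipeline), the sketch below keeps image regions whose local sharpness is high, on the assumption that the refocused object plane is sharp while the background is blurred; the window size and the fraction of pixels kept are arbitrary illustrative choices.

      # Simplified "de-clutter by blur difference" sketch: retain pixels whose local
      # Laplacian energy (a proxy for sharpness) is high, suppress the defocused rest.
      import cv2
      import numpy as np

      def declutter(img_gray, win=15, keep_fraction=0.3):
          lap = cv2.Laplacian(img_gray.astype(np.float32), cv2.CV_32F)
          sharpness = cv2.boxFilter(lap * lap, -1, (win, win))   # local sharpness map
          thresh = np.quantile(sharpness, 1.0 - keep_fraction)
          mask = (sharpness >= thresh).astype(img_gray.dtype)
          return img_gray * mask, mask

      # img = cv2.imread("refocused_plane.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
      # decluttered, mask = declutter(img)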

  20. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    With the rapid development of the 21st century and the continuous progress of science and technology, human society has entered the era of information and big data, and lifestyles and aesthetic systems are changing accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of turning large volumes of complex information and data into visual form, so that information can be absorbed quickly and time cost is reduced. As information visualization has developed, information design has also attracted growing attention, and emotional, people-oriented design has become an indispensable part of it. This paper examines information visualization design from the perspective of art and design, through an emotional analysis of information design grounded in the social context of people-oriented experience. The discussion is organized around the three levels of emotional information design: the instinct level, the behavior level, and the reflective level.

  1. Visual Processing Deficits in Children with Slow RAN Performance

    ERIC Educational Resources Information Center

    Stainthorp, Rhona; Stuart, Morag; Powell, Daisy; Quinlan, Philip; Garwood, Holly

    2010-01-01

    Two groups of 8- to 10-year-olds differing in rapid automatized naming speed but matched for age, verbal and nonverbal ability, phonological awareness, phonological memory, and visual acuity participated in four experiments investigating early visual processing. As low RAN children had significantly slower simple reaction times (SRT) this was…

  2. Successful Outcomes from a Structured Curriculum Used in the Veterans Affairs Low Vision Intervention Trial

    ERIC Educational Resources Information Center

    Stelmack, Joan A.; Rinne, Stephen; Mancil, Rickilyn M.; Dean, Deborah; Moran, D'Anna; Tang, X. Charlene; Cummings, Roger; Massof, Robert W.

    2008-01-01

    A low vision rehabilitation program with a structured curriculum was evaluated in a randomized controlled trial. The treatment group demonstrated large improvements in self-reported visual function (reading, mobility, visual information processing, visual motor skills, and overall). The team approach and the protocols of the treatment program are…

  3. Visual local and global processing in low-functioning deaf individuals with and without autism spectrum disorder.

    PubMed

    Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A

    2011-01-01

    The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered to four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed. Age and level of functioning were taken into account. Analyses of covariance revealed that results of deaf individuals with ID and ASD are in line with the results of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not compared with deaf individuals with ID. In ASD either with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of the central coherence theory, whereas in deafness, the enhancement might reflect a more general improvement in visual perception resulting from auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  4. Salience and Attention in Surprisal-Based Accounts of Language Processing

    PubMed Central

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus. PMID:27375525

  5. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    PubMed

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  6. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek-out sensory stimulation, fundamentally altering their perceptual experience. Here, we report such processing styles will affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose the study of ensemble processing should extend beyond the statistics of the display, and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Sensitivity to synchronicity of biological motion in normal and amblyopic vision

    PubMed Central

    Luu, Jennifer Y.; Levi, Dennis M.

    2017-01-01

    Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions, indicating that higher-level processing of biological motion remains intact. We also found no impairment in synchronicity sensitivity in the amblyopic visual system relative to the normal visual system. Since there is no impairment in synchronicity sensitivity in either the nonamblyopic or amblyopic eye of amblyopes, our results suggest that the higher order processing of biological motion is intact. PMID:23474301

  8. Early-Stage Visual Processing and Cortical Amplification Deficits in Schizophrenia

    PubMed Central

    Butler, Pamela D.; Zemon, Vance; Schechter, Isaac; Saperstein, Alice M.; Hoptman, Matthew J.; Lim, Kelvin O.; Revheim, Nadine; Silipo, Gail; Javitt, Daniel C.

    2005-01-01

    Background Patients with schizophrenia show deficits in early-stage visual processing, potentially reflecting dysfunction of the magnocellular visual pathway. The magnocellular system operates normally in a nonlinear amplification mode mediated by glutamatergic (N-methyl-d-aspartate) receptors. Investigating magnocellular dysfunction in schizophrenia therefore permits evaluation of underlying etiologic hypotheses. Objectives To evaluate magnocellular dysfunction in schizophrenia, relative to known neurochemical and neuroanatomical substrates, and to examine relationships between electrophysiological and behavioral measures of visual pathway dysfunction and relationships with higher cognitive deficits. Design, Setting, and Participants Between-group study at an inpatient state psychiatric hospital and out-patient county psychiatric facilities. Thirty-three patients met DSM-IV criteria for schizophrenia or schizoaffective disorder, and 21 nonpsychiatric volunteers of similar ages composed the control group. Main Outcome Measures (1) Magnocellular and parvocellular evoked potentials, analyzed using nonlinear (Michaelis-Menten) and linear contrast gain approaches; (2) behavioral contrast sensitivity measures; (3) white matter integrity; (4) visual and nonvisual neuropsychological measures, and (5) clinical symptom and community functioning measures. Results Patients generated evoked potentials that were significantly reduced in response to magnocellular-biased, but not parvocellular-biased, stimuli (P=.001). Michaelis-Menten analyses demonstrated reduced contrast gain of the magnocellular system (P=.001). Patients showed decreased contrast sensitivity to magnocellular-biased stimuli (P<.001). Evoked potential deficits were significantly related to decreased white matter integrity in the optic radiations (P<.03). Evoked potential deficits predicted impaired contrast sensitivity (P=.002), which was in turn related to deficits in complex visual processing (P≤.04). Both evoked potential (P≤.04) and contrast sensitivity (P=.01) measures significantly predicted community functioning. Conclusions These findings confirm the existence of early-stage visual processing dysfunction in schizophrenia and provide the first evidence that such deficits are due to decreased nonlinear signal amplification, consistent with glutamatergic theories. Neuroimaging studies support the hypothesis of dysfunction within low-level visual pathways involving thalamocortical radiations. Deficits in early-stage visual processing significantly predict higher cognitive deficits. PMID:15867102
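
    The "nonlinear (Michaelis-Menten) contrast gain" analysis referred to here fits the standard hyperbolic contrast-response function R(C) = Rmax·C / (C + C50) to response amplitude as a function of stimulus contrast; reduced magnocellular gain then appears as a lower initial slope (Rmax/C50) and/or a higher semisaturation contrast C50. The sketch below is a generic illustration of such a fit with made-up values, not the study's data or exact fitting procedure.

      # Hyperbolic (Michaelis-Menten) contrast-response fit: R(C) = Rmax * C / (C + C50).
      # Contrast and amplitude values are invented purely for illustration.
      import numpy as np
      from scipy.optimize import curve_fit

      def michaelis_menten(c, r_max, c50):
          return r_max * c / (c + c50)

      contrast = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # percent contrast
      amplitude = np.array([0.5, 1.1, 1.9, 2.8, 3.3, 3.6])     # response amplitude (a.u.)

      (r_max, c50), _ = curve_fit(michaelis_menten, contrast, amplitude, p0=[4.0, 5.0])
      print(f"Rmax = {r_max:.2f}, C50 = {c50:.2f}, initial contrast gain = {r_max / c50:.2f}")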

  9. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Neuropsychological Components of Imagery Processing, Final Technical Report.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.

    High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…

  11. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    PubMed

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  12. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    PubMed

    Breitmeyer, Bruno G

    2015-09-01

    Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods and additional reasonable theoretical considerations suggest the following tentative hierarchy. At the highest levels in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression, fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although unconscious processing levels indexed by additional blinding methods is yet to be determined, a tentative placement at lower levels in the hierarchy is also given for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and at higher levels in the hierarchy indexed by attentional blinding effects in addition to the level indexed by the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Neural organization and visual processing in the anterior optic tubercle of the honeybee brain.

    PubMed

    Mota, Theo; Yamagata, Nobuhiro; Giurfa, Martin; Gronenberg, Wulfila; Sandoz, Jean-Christophe

    2011-08-10

    The honeybee Apis mellifera represents a valuable model for studying the neural segregation and integration of visual information. Vision in honeybees has been extensively studied at the behavioral level and, to a lesser degree, at the physiological level using intracellular electrophysiological recordings of single neurons. However, our knowledge of visual processing in honeybees is still limited by the lack of functional studies of visual processing at the circuit level. Here we contribute to filling this gap by providing a neuroanatomical and neurophysiological characterization at the circuit level of a practically unstudied visual area of the bee brain, the anterior optic tubercle (AOTu). First, we analyzed the internal organization and neuronal connections of the AOTu. Second, we established a novel protocol for performing optophysiological recordings of visual circuit activity in the honeybee brain and studied the responses of AOTu interneurons during stimulation of distinct eye regions. Our neuroanatomical data show an intricate compartmentalization and connectivity of the AOTu, revealing a dorsoventral segregation of the visual input to the AOTu. Light stimuli presented in different parts of the visual field (dorsal, lateral, or ventral) induce distinct patterns of activation in AOTu output interneurons, retaining to some extent the dorsoventral input segregation revealed by our neuroanatomical data. In particular, activity patterns evoked by dorsal and ventral eye stimulation are clearly segregated into distinct AOTu subunits. Our results therefore suggest an involvement of the AOTu in the processing of dorsoventrally segregated visual information in the honeybee brain.

  14. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  15. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
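
    The integration step described above (one normalized index per low-level feature inside each oversegmented region, averaged into a relevance score and thresholded into a binary RoI decision) can be sketched as follows. The feature indexes and threshold are simplified stand-ins for the paper's intensity, color, orientation, and texture cues, and the region labels are assumed to come from some oversegmentation such as superpixels.

      # Sketch of the relevance computation: per-region feature indexes averaged into
      # a single score, then a binary RoI decision. Features and threshold are ad hoc.
      import numpy as np

      def region_relevance(image_lab, labels, threshold=0.5):
          # image_lab: H x W x 3 image in CIELAB; labels: H x W oversegmentation map.
          rois = []
          for r in np.unique(labels):
              px = image_lab[labels == r].astype(float)          # pixels of region r
              idx_intensity = px[:, 0].mean() / 100.0            # mean lightness, 0..1
              idx_color = min(np.linalg.norm(px[:, 1:].mean(0)) / 128.0, 1.0)  # chroma
              idx_texture = min(px[:, 0].std() / 50.0, 1.0)      # lightness variability
              relevance = np.mean([idx_intensity, idx_color, idx_texture])
              if relevance > threshold:                          # binary decision rule
                  rois.append(int(r))
          return rois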

  17. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  18. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

    PubMed Central

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-01-01

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
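
    To make the modeling-plus-GLM idea concrete, the sketch below turns one low-level stimulus statistic of a movie (frame-to-frame luminance change, a crude temporal-discontinuity measure) into a GLM regressor by convolving it with a canonical haemodynamic response function and resampling it at the scan times. The feature choice, the double-gamma HRF parameters, and the numbers are illustrative assumptions, not the authors' model.

      # Sketch: turn a low-level movie statistic into an fMRI GLM regressor.
      # Frame-to-frame luminance change stands in for the temporal-discontinuity
      # measures described above; HRF parameters are the conventional double-gamma.
      import numpy as np
      from scipy.stats import gamma

      def temporal_discontinuity(frames):
          # frames: (n_frames, H, W) grayscale movie; mean absolute change per frame.
          diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
          return np.concatenate([[0.0], diffs])

      def canonical_hrf(dt, duration=32.0):
          t = np.arange(0.0, duration, dt)
          return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

      def make_regressor(feature, frame_rate, tr, n_scans):
          dt = 1.0 / frame_rate
          conv = np.convolve(feature, canonical_hrf(dt))[:len(feature)]
          scan_times = np.arange(n_scans) * tr
          return np.interp(scan_times, np.arange(len(feature)) * dt, conv)

      # regressor = make_regressor(temporal_discontinuity(frames), frame_rate=25, tr=2.0, n_scans=300)
      # The regressor then enters a standard hypothesis-driven (GLM) design matrix.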

  19. Psilocybin impairs high-level but not low-level motion perception.

    PubMed

    Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X

    2004-08-26

    The hallucinogenic serotonin 1A/2A receptor agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced-choice direction-of-motion discrimination task, we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed with respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.

  20. Cholinergic enhancement of visual attention and neural oscillations in the human brain.

    PubMed

    Bauer, Markus; Kluge, Christian; Bach, Dominik; Bradbury, David; Heinze, Hans Jochen; Dolan, Raymond J; Driver, Jon

    2012-03-06

    Cognitive processes such as visual perception and selective attention induce specific patterns of brain oscillations. The neurochemical bases of these spectral changes in neural activity are largely unknown, but neuromodulators are thought to regulate processing. The cholinergic system is linked to attentional function in vivo, whereas separate in vitro studies show that cholinergic agonists induce high-frequency oscillations in slice preparations. This has led to theoretical proposals that cholinergic enhancement of visual attention might operate via gamma oscillations in visual cortex, although low-frequency alpha/beta modulation may also play a key role. Here we used MEG to record cortical oscillations in the context of administration of a cholinergic agonist (physostigmine) during a spatial visual attention task in humans. This cholinergic agonist enhanced spatial attention effects on low-frequency alpha/beta oscillations in visual cortex, an effect correlating with a drug-induced speeding of performance. By contrast, the cholinergic agonist did not alter high-frequency gamma oscillations in visual cortex. Thus, our findings show that cholinergic neuromodulation enhances attentional selection via an impact on oscillatory synchrony in visual cortex, for low rather than high frequencies. We discuss this dissociation between high- and low-frequency oscillations in relation to proposals that lower-frequency oscillations are generated by feedback pathways within visual cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. A deep (learning) dive into visual search behaviour of breast radiologists

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2018-03-01

    Visual search, the process of detecting and identifying objects using the eye movements (saccades) and the foveal vision, has been studied for identification of root causes of errors in the interpretation of mammography. The aim of this study is to model visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically-inspired multilayer perceptron that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) have been used to train the model, which was pre-trained with the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from radiologists' visual search maps (obtained by a head mounted eye tracking device). These areas, along with the radiologists' assessment (including confidence of the assessment) of suspected malignancy were used to model: 1) Radiologists' decision; 2) Radiologists' confidence on such decision; and 3) The attentional level (i.e. foveal, peripheral or none) obtained by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling such behaviours.
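
    The abstract gives only the outline of the modelling approach (an ImageNet-pretrained convolutional network fine-tuned on radiologist-derived labels), so the following is a generic transfer-learning sketch rather than the study's implementation; the backbone (ResNet-50 via torchvision, assuming torchvision 0.13 or later), the three-class attentional-level target, and the training details are assumptions made for illustration.

      # Transfer-learning sketch: ImageNet-pretrained backbone with a new head that
      # predicts one of the modelled targets, e.g. the attentional level an area of
      # the mammogram received (foveal / peripheral / never fixated).
      import torch
      import torch.nn as nn
      from torchvision import models

      n_classes = 3  # foveal, peripheral, never fixated (hypothetical encoding)

      model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
      for p in model.parameters():
          p.requires_grad = False                    # freeze the pretrained features
      model.fc = nn.Linear(model.fc.in_features, n_classes)  # trainable classifier head

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      def train_step(patches, labels):
          # patches: (batch, 3, 224, 224) tensor of normalized mammogram crops.
          optimizer.zero_grad()
          loss = criterion(model(patches), labels)
          loss.backward()
          optimizer.step()
          return loss.item()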

  2. Bullying in German Adolescents: Attending Special School for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Pinquart, Martin; Pfeiffer, Jens P.

    2011-01-01

    The present study analysed bullying in German adolescents with and without visual impairment. Ninety-eight adolescents with vision loss from schools for students with visual impairment, of whom 31 were blind and 67 had low vision, were compared with 98 sighted peers using a matched-pair design. Students with low vision reported higher levels of…

  3. Optimization of Visual Information Presentation for Visual Prosthesis.

    PubMed

    Guo, Fei; Yang, Yuan; Gao, Yong

    2018-01-01

    Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a large amount of information is lost when presenting daily scenes. The ability to recognize objects in real-life scenarios is therefore severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We conclude that a visual prosthesis using our proposed strategy can help blind users improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses.
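
    A rough sketch of the two strategies follows: (a) foreground zooming with background removal and (b) foreground edge extraction with background reduction, each downsampled to a coarse grid as a stand-in for the low-resolution phosphene display. OpenCV's spectral-residual saliency (a contrib module) is used here as a generic substitute for the paper's salient-object detector, and the grid size and thresholds are illustrative.

      # Sketch of the two strategies: (a) foreground zoom with background removal and
      # (b) foreground edges with background reduction, each reduced to a coarse grid.
      import cv2
      import numpy as np

      def salient_mask(img_bgr):
          sal = cv2.saliency.StaticSaliencySpectralResidual_create()
          ok, smap = sal.computeSaliency(img_bgr)
          return (smap > smap.mean() + smap.std()).astype(np.uint8)

      def foreground_zoom(img_gray, mask, out=(32, 32)):
          x, y, w, h = cv2.boundingRect(mask)          # crop to the salient object
          crop = img_gray[y:y + h, x:x + w] * mask[y:y + h, x:x + w]
          return cv2.resize(crop, out, interpolation=cv2.INTER_AREA)

      def foreground_edges(img_gray, mask, out=(32, 32)):
          edges = cv2.Canny(img_gray, 50, 150) * mask  # keep only the object's edges
          return cv2.resize(edges, out, interpolation=cv2.INTER_AREA)

      # img = cv2.imread("scene.jpg"); gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      # phosphenes_a = foreground_zoom(gray, salient_mask(img))
      # phosphenes_b = foreground_edges(gray, salient_mask(img))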

  4. Optimization of Visual Information Presentation for Visual Prosthesis

    PubMed Central

    Gao, Yong

    2018-01-01

    Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a large amount of information is lost when presenting daily scenes. The ability to recognize objects in real-life scenarios is therefore severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve the recognition accuracy of objects. We conclude that a visual prosthesis using our proposed strategy can help blind users improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses. PMID:29731769

  5. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    PubMed

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons at levels as low as area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as the response variations caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.

  6. Individual Alpha Peak Frequency Predicts 10 Hz Flicker Effects on Selective Attention.

    PubMed

    Gulbinaite, Rasa; van Viegen, Tara; Wieling, Martijn; Cohen, Michael X; VanRullen, Rufin

    2017-10-18

    Rhythmic visual stimulation ("flicker") is primarily used to "tag" processing of low-level visual and high-level cognitive phenomena. However, preliminary evidence suggests that flicker may also entrain endogenous brain oscillations, thereby modulating cognitive processes supported by those brain rhythms. Here we tested the interaction between 10 Hz flicker and endogenous alpha-band (∼10 Hz) oscillations during a selective visuospatial attention task. We recorded EEG from human participants (both genders) while they performed a modified Eriksen flanker task in which distractors and targets flickered within (10 Hz) or outside (7.5 or 15 Hz) the alpha band. By using a combination of EEG source separation, time-frequency, and single-trial linear mixed-effects modeling, we demonstrate that 10 Hz flicker interfered with stimulus processing more on incongruent than congruent trials (high vs low selective attention demands). Crucially, the effect of 10 Hz flicker on task performance was predicted by the distance between 10 Hz and individual alpha peak frequency (estimated during the task). Finally, the flicker effect on task performance was more strongly predicted by EEG flicker responses during stimulus processing than during preparation for the upcoming stimulus, suggesting that 10 Hz flicker interfered more with reactive than proactive selective attention. These findings are consistent with our hypothesis that visual flicker entrained endogenous alpha-band networks, which in turn impaired task performance. Our findings also provide novel evidence for frequency-dependent exogenous modulation of cognition that is determined by the correspondence between the exogenous flicker frequency and the endogenous brain rhythms. SIGNIFICANCE STATEMENT Here we provide novel evidence that the interaction between exogenous rhythmic visual stimulation and endogenous brain rhythms can have frequency-specific behavioral effects. We show that alpha-band (10 Hz) flicker impairs stimulus processing in a selective attention task when the stimulus flicker rate matches individual alpha peak frequency. The effect of sensory flicker on task performance was stronger when selective attention demands were high, and was stronger during stimulus processing and response selection compared with the prestimulus anticipatory period. These findings provide novel evidence that frequency-specific sensory flicker affects online attentional processing, and also demonstrate that the correspondence between exogenous and endogenous rhythms is an overlooked prerequisite when testing for frequency-specific cognitive effects of flicker. Copyright © 2017 the authors 0270-6474/17/3710173-12$15.00/0.
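
    As an illustration of the single-trial linear mixed-effects approach mentioned here, the sketch below models reaction time from congruency, flicker condition, and the distance between 10 Hz and the individual alpha peak frequency, with by-subject random intercepts. The column names, formula, and synthetic data are assumptions made for illustration; the published model is more elaborate.

      # Single-trial mixed-effects sketch: RT ~ congruency x flicker condition x
      # |10 Hz - individual alpha peak|, with by-subject random intercepts.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 600
      trials = pd.DataFrame({
          "subject": rng.integers(1, 21, n),
          "congruency": rng.choice(["congruent", "incongruent"], n),
          "flicker_hz": rng.choice([7.5, 10.0, 15.0], n),
          "alpha_peak_hz": rng.normal(10.0, 1.0, n),
      })
      trials["alpha_dist"] = (10.0 - trials["alpha_peak_hz"]).abs()
      trials["rt"] = (0.45 + 0.03 * (trials["congruency"] == "incongruent")
                      + rng.normal(0.0, 0.05, n))

      model = smf.mixedlm("rt ~ congruency * C(flicker_hz) * alpha_dist",
                          data=trials, groups=trials["subject"]).fit()
      print(model.summary())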

  7. Design Considerations for Remotely Operated Welding in Space: Task Definition and Visual Weld Monitoring Experiment

    DTIC Science & Technology

    1993-05-01

    [Front-matter extraction residue; recoverable figure captions: Figure 4.14 "Energy effectiveness comparison between EBW, GMAW, and PAW"; Figure 5.2 "The spectrum of control modes"; Figure 5.3 "Levels of control for GMAW". Recoverable abbreviations: FTS, Flight Telerobotic Servicer; GMAW, gas metal arc welding; GTAW, gas tungsten arc welding; LEO, low-Earth orbit; NDT, non-destructive test.]

  8. Manipulating and Visualizing Molecular Interactions in Customized Nanoscale Spaces

    NASA Astrophysics Data System (ADS)

    Stabile, Francis; Henkin, Gil; Berard, Daniel; Shayegan, Marjan; Leith, Jason; Leslie, Sabrina

    We present a dynamically adjustable nanofluidic platform for formatting the conformations of and visualizing the interaction kinetics between biomolecules in solution, offering new time resolution and control of the reaction processes. This platform extends convex lens-induced confinement (CLiC), a technique for imaging molecules under confinement, by introducing a system for in situ modification of the chemical environment; this system uses a deep microchannel to diffusively exchange reagents within the nanoscale imaging region, whose height is fixed by a nanopost array. To illustrate, we visualize and manipulate salt-induced, surfactant-induced, and enzyme-induced reactions between small-molecule reagents and DNA molecules, where the conformations of the DNA molecules are formatted by the imposed nanoscale confinement. By using nanofabricated, nonabsorbing, low-background glass walls to confine biomolecules, our nanofluidic platform facilitates quantitative exploration of physiologically and biotechnologically relevant processes at the nanoscale. This device provides new kinetic information about dynamic chemical processes at the single-molecule level, using advancements in the CLiC design including a microchannel-based diffuser and postarray-based dialysis slit.

  9. Conservation implications of anthropogenic impacts on visual communication and camouflage.

    PubMed

    Delhey, Kaspar; Peters, Anne

    2017-02-01

    Anthropogenic environmental impacts can disrupt the sensory environment of animals and affect important processes from mate choice to predator avoidance. Currently, these effects are best understood for auditory and chemosensory modalities, and recent reviews highlight their importance for conservation. We examined how anthropogenic changes to the visual environment (ambient light, transmission, and backgrounds) affect visual communication and camouflage and considered the implications of these effects for conservation. Human changes to the visual environment can increase predation risk by affecting camouflage effectiveness, lead to maladaptive patterns of mate choice, and disrupt mutualistic interactions between pollinators and plants. Implications for conservation are particularly evident for disrupted camouflage due to its tight links with survival. The conservation importance of impaired visual communication is less documented. The effects of anthropogenic changes on visual communication and camouflage may be severe when they affect critical processes such as pollination or species recognition. However, when impaired mate choice does not lead to hybridization, the conservation consequences are less clear. We suggest that the demographic effects of human impacts on visual communication and camouflage will be particularly strong when human-induced modifications to the visual environment are evolutionarily novel (i.e., very different from natural variation); affected species and populations have low levels of intraspecific (genotypic and phenotypic) variation and behavioral, sensory, or physiological plasticity; and the processes affected are directly related to survival (camouflage), species recognition, or number of offspring produced, rather than offspring quality or attractiveness. Our findings suggest that anthropogenic effects on the visual environment may be of similar importance relative to conservation as anthropogenic effects on other sensory modalities. © 2016 Society for Conservation Biology.

  10. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.

  11. Objectively measured physical activity in Brazilians with visual impairment: description and associated factors.

    PubMed

    Barbosa Porcellis da Silva, Rafael; Marques, Alexandre Carriconde; Reichert, Felipe Fossati

    2017-05-19

    A low level of physical activity is a serious health issue in individuals with visual impairment. Few studies have objectively measured physical activity in this population group, particularly outside high-income countries. The aim of this study was to describe physical activity measured by accelerometry and its associated factors in Brazilian adults with visual impairment. In a cross-sectional design, 90 adults (18-95 years old) answered a questionnaire and wore an accelerometer for at least 3 days (including one weekend day) to measure physical activity (min/day). Sixty percent of the individuals practiced at least 30 min/day of moderate-to-vigorous physical activity. Individuals who were blind were less active, spent more time in sedentary activities and spent less time in moderate and vigorous activities than those with low vision. Individuals who walked mainly without any assistance were more active, spent less time in sedentary activities and spent more time in light and moderate activities than those who walked with a long cane or sighted guide. Our data highlight factors associated with lower levels of physical activity in people with visual impairment. These factors, such as being blind and walking with assistance, should be tackled in interventions to increase physical activity levels among individuals with visual impairment. Implications for Rehabilitation: Physical inactivity is a serious health issue in people with visual impairment worldwide, and specialized institutions and public policies must work to increase the physical activity levels of this population. Those with lower visual acuity and walking with any aid are at a higher risk of having low levels of physical activity. The association between visual response profile, living for less than 11 years with visual impairment, and PA levels deserves further investigation. Findings of the present study provide reliable data to support rehabilitation programs, highlighting the need to pay special attention to the subgroups that are even more likely to be inactive.

  12. Individual differences in working memory capacity and workload capacity.

    PubMed

    Yu, Ju-Chi; Chang, Ting-Yun; Yang, Cheng-Ta

    2014-01-01

    We investigated the relationship between working memory capacity (WMC) and workload capacity (WLC). Each participant performed an operation span (OSPAN) task to measure his/her WMC and three redundant-target detection tasks to measure his/her WLC. WLC was computed non-parametrically (Experiments 1 and 2) and parametrically (Experiment 2). Both levels of analysis showed that participants high in WMC had larger WLC than those low in WMC only when redundant information came from visual and auditory modalities, suggesting that high-WMC participants had superior processing capacity in dealing with redundant visual and auditory information. This difference was eliminated when the redundant information required processing within only a single working memory subsystem, as in a color-shape detection task and a double-dot detection task. These results highlighted the role of executive control in integrating and binding information from the two working memory subsystems for perceptual decision making.

  13. Evidence accumulation in obsessive-compulsive disorder: the role of uncertainty and monetary reward on perceptual decision-making thresholds.

    PubMed

    Banca, Paula; Vestergaard, Martin D; Rankov, Vladan; Baek, Kwangyeol; Mitchell, Simon; Lapa, Tatyana; Castelo-Branco, Miguel; Voon, Valerie

    2015-03-13

    The compulsive behaviour underlying obsessive-compulsive disorder (OCD) may be related to abnormalities in decision-making. The inability to commit to ultimate decisions, for example, patients unable to decide whether their hands are sufficiently clean, may reflect failures in accumulating sufficient evidence before a decision. Here we investigate the process of evidence accumulation in OCD in perceptual discrimination, hypothesizing enhanced evidence accumulation relative to healthy volunteers. Twenty-eight OCD patients and thirty-five controls were tested with a low-level visual perceptual task (random-dot-motion task, RDMT) and two response conflict control tasks. Regression analysis across different motion coherence levels and Hierarchical Drift Diffusion Modelling (HDDM) were used to characterize response strategies between groups in the RDMT. Patients required more evidence under high uncertainty perceptual contexts, as indexed by longer response time and higher decision boundaries. HDDM, which defines a decision when accumulated noisy evidence reaches a decision boundary, further showed slower drift rate towards the decision boundary reflecting poorer quality of evidence entering the decision process in patients under low uncertainty. With monetary incentives emphasizing speed and penalty for slower responses, patients decreased the decision thresholds relative to controls, accumulating less evidence in low uncertainty. These findings were unrelated to visual perceptual deficits and response conflict. This study provides evidence for impaired decision-formation processes in OCD, with a differential influence of high and low uncertainty contexts on evidence accumulation (decision threshold) and on the quality of evidence gathered (drift rates). It further emphasizes that OCD patients are sensitive to monetary incentives heightening speed in the speed-accuracy tradeoff, improving evidence accumulation.
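
    The study's key quantities, drift rate and decision boundary, come from the drift-diffusion framework behind HDDM. The sketch below is a minimal forward simulation of that framework, showing how a lower drift rate (poorer evidence quality) or a higher boundary (more caution) lengthens response times; the parameter values are illustrative, not fitted estimates from this study.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=500, dt=0.001, noise=1.0, seed=0):
    """Simulate a symmetric drift-diffusion process: evidence starts at zero and
    accumulates until it reaches +boundary (correct) or -boundary (error)."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)
    return np.array(rts), np.array(correct)

# Lower drift and higher boundaries both lengthen response times, mirroring the
# group differences described above.
for drift, boundary in [(2.0, 1.0), (1.0, 1.0), (1.0, 1.5)]:
    rts, correct = simulate_ddm(drift, boundary)
    print(f"drift={drift}, boundary={boundary}: "
          f"mean RT={rts.mean():.2f} s, accuracy={correct.mean():.1%}")
```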

  14. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
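
    The spectral structure variation measure is described as a change in the entropy of the acoustic signal over time. The code below is a generic sketch of that idea (not the authors' exact metric): frame-by-frame spectral entropy computed from a short-time Fourier transform, with the window length and the synthetic tone/noise signal as assumptions.

```python
import numpy as np
from scipy.signal import stft

def spectral_entropy_over_time(signal, fs, nperseg=1024):
    """Shannon entropy (bits) of the normalized power spectrum in each STFT frame."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    p = power / (power.sum(axis=0, keepdims=True) + 1e-12)   # per-frame distribution
    return -(p * np.log2(p + 1e-12)).sum(axis=0)

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)        # low entropy: energy concentrated in one bin
noise = np.random.randn(t.size)           # high entropy: energy spread across bins
sound = np.concatenate([tone, noise])

entropy = spectral_entropy_over_time(sound, fs)
# The spread of this entropy trajectory is one simple summary of how much the
# spectral structure of a sound changes over time.
print(f"entropy range {entropy.min():.2f}-{entropy.max():.2f} bits, SD = {entropy.std():.2f}")
```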

  15. A hierarchy of timescales explains distinct effects of local inhibition of primary visual cortex and frontal eye fields

    PubMed Central

    Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B

    2016-01-01

    Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas. DOI: http://dx.doi.org/10.7554/eLife.15252.001 PMID:27596931

  16. A hierarchy of timescales explains distinct effects of local inhibition of primary visual cortex and frontal eye fields.

    PubMed

    Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B

    2016-09-06

    Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.

  17. Multivariate pattern analysis of MEG and EEG: A comparison of representational structure in time and space.

    PubMed

    Cichy, Radoslaw Martin; Pantazis, Dimitrios

    2017-09-01

    Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in sampling neural activity. This raises the question of the degree to which such measurement differences consistently bias the results of multivariate analysis applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in low-level visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view on neural processing, and motivate the wider adoption of these methods in both MEG and EEG research. Copyright © 2017 Elsevier Inc. All rights reserved.
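
    The analysis described here amounts to training and testing a classifier on the sensor pattern at every time point, yielding a decoding time course. The sketch below shows that generic procedure on synthetic data with hypothetical dimensions; it is an illustration of time-resolved multivariate classification, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 100            # hypothetical epoch dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                        # two stimulus conditions
X[y == 1, :10, 40:] += 0.5                              # inject a condition effect after t=40

# Train and test a classifier on the sensor pattern at each time point separately,
# yielding a decoding-accuracy time course.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
accuracy = np.array([cross_val_score(clf, X[:, :, ti], y, cv=5).mean()
                     for ti in range(n_times)])
print(f"peak decoding accuracy {accuracy.max():.2f} at time index {accuracy.argmax()}")
```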

  18. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification.

    PubMed

    Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona

    2018-04-17

    Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.
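
    The central claim is that reaction times grow logarithmically with set size, with a slope set by target–distractor similarity. As a minimal illustration with made-up mean RTs (the numbers and the two similarity conditions are illustrative only), the sketch below fits RT = intercept + slope·log(set size) and compares slopes.

```python
import numpy as np

def fit_log_rt(set_sizes, rts):
    """Least-squares fit of RT = intercept + slope * log(set size)."""
    slope, intercept = np.polyfit(np.log(set_sizes), rts, deg=1)
    return intercept, slope

set_sizes = np.array([1, 5, 10, 20, 30])
# Hypothetical mean RTs (ms): the high-similarity condition rises more steeply.
rt_low_similarity = np.array([520, 535, 545, 555, 560])
rt_high_similarity = np.array([520, 570, 600, 630, 645])

for label, rts in [("low similarity", rt_low_similarity),
                   ("high similarity", rt_high_similarity)]:
    intercept, slope = fit_log_rt(set_sizes, rts)
    print(f"{label}: RT ≈ {intercept:.0f} + {slope:.1f}·log(N) ms")
```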

  19. FAST: framework for heterogeneous medical image computing and visualization.

    PubMed

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphics processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  20. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
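
    ssVEP responses are commonly quantified as the spectral amplitude at the tagging (flicker) frequency relative to neighbouring frequency bins. The sketch below shows that generic computation on synthetic data; the sampling rate, flicker frequency, and signal-to-noise definition are assumptions rather than a specific published pipeline.

```python
import numpy as np

def ssvep_snr(eeg, fs, flicker_hz, n_neighbors=10):
    """Amplitude at the flicker frequency divided by the mean amplitude of
    neighbouring bins (excluding the bins immediately adjacent to the target)."""
    amp = np.abs(np.fft.rfft(eeg)) / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
    target = np.argmin(np.abs(freqs - flicker_hz))
    neighbors = np.r_[target - n_neighbors:target - 1, target + 2:target + n_neighbors + 1]
    return amp[target] / amp[neighbors].mean()

fs = 500
t = np.arange(0, 10, 1 / fs)
eeg = 0.8 * np.sin(2 * np.pi * 15 * t) + np.random.randn(t.size)   # 15 Hz flicker tag
print(f"SNR at 15 Hz: {ssvep_snr(eeg, fs, 15):.1f}")
```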

  1. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    Understanding the interactions of structured communities known as “biofilms” and other complex matrices is possible through X-ray micro tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual interventions due to low contrast between objects and high noise levels. Thus, new software is required for the effective interpretation and analysis of the data. This work describes the development and application of software to analyze and visualize high-resolution X-ray micro tomography datasets.

  2. Reduced Distractibility in a Remote Culture

    PubMed Central

    de Fockert, Jan W.; Caparos, Serge; Linnell, Karina J.; Davidoff, Jules

    2011-01-01

    Background In visual processing, there are marked cultural differences in the tendency to adopt either a global or local processing style. A remote culture (the Himba) has recently been reported to have a greater local bias in visual processing than Westerners. Here we give the first evidence that a greater, and remarkable, attentional selectivity provides the basis for this local bias. Methodology/Principal Findings In Experiment 1, Eriksen-type flanker interference was measured in the Himba and in Western controls. In both groups, responses to the direction of a task-relevant target arrow were affected by the compatibility of task-irrelevant distractor arrows. However, the Himba showed a marked reduction in overall flanker interference compared to Westerners. The smaller interference effect in the Himba occurred despite their overall slower performance than Westerners, and was evident even at a low level of perceptual load of the displays. In Experiment 2, the attentional selectivity of the Himba was further demonstrated by showing that their attention was not even captured by a moving singleton distractor. Conclusions/Significance We argue that the reduced distractibility in the Himba is clearly consistent with their tendency to prioritize the analysis of local details in visual processing. PMID:22046275

  3. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  4. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  5. Figure–ground organization and the emergence of proto-objects in the visual cortex

    PubMed Central

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex. PMID:26579062

  6. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  7. A portable low-cost long-term live-cell imaging platform for biomedical research and education.

    PubMed

    Walzik, Maria P; Vollmar, Verena; Lachnit, Theresa; Dietz, Helmut; Haug, Susanne; Bachmann, Holger; Fath, Moritz; Aschenbrenner, Daniel; Abolpour Mofrad, Sepideh; Friedrich, Oliver; Gilbert, Daniel F

    2015-02-15

    Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile, costly and requires a high level of skill to use and maintain. These factors limit its utility to field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22 × 22 × 22 cm) and extremely low-cost (<€1250). We demonstrate its potential for biomedical use by long-term imaging of recombinant HEK293 cells at varying culture conditions and validate its ability to generate time-resolved data of high quality allowing for analysis of time-dependent processes in living cells. While this work focuses on long-term imaging of mammalian cells, the presented technology could also be adapted for use with other biological specimen and provides a general example of rapidly prototyped low-cost biosensor technology for application in life sciences and education. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing

    ERIC Educational Resources Information Center

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-01-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…

  9. Manipulating Color and Other Visual Information Influences Picture Naming at Different Levels of Processing: Evidence from Alzheimer Subjects and Normal Controls

    ERIC Educational Resources Information Center

    Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.

    2010-01-01

    There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…

  10. Educating the blind brain: a panorama of neural bases of vision and of training programs in organic neurovisual deficits

    PubMed Central

    Coubard, Olivier A.; Urbanski, Marika; Bourlon, Clémence; Gaumet, Marie

    2014-01-01

    Vision is a complex function, which is achieved by movements of the eyes to properly foveate targets at any location in 3D space and to continuously refresh neural information in the different visual pathways. The visual system involves five main routes originating in the retinas but varying in their destination within the brain: the occipital cortex, but also the superior colliculus (SC), the pretectum, the supra-chiasmatic nucleus, the nucleus of the optic tract and terminal dorsal, medial and lateral nuclei. Visual pathway architecture obeys systematization in sagittal and transversal planes so that visual information from left/right and upper/lower hemi-retinas, corresponding respectively to right/left and lower/upper visual fields, is processed ipsilaterally and ipsialtitudinally to hemi-retinas in left/right hemispheres and upper/lower fibers. Organic neurovisual deficits may occur at any level of this circuitry from the optic nerve to subcortical and cortical destinations, resulting in low or high-level visual deficits. In this didactic review article, we provide a panorama of the neural bases of eye movements and visual systems, and of related neurovisual deficits. Additionally, we briefly review the different schools of rehabilitation of organic neurovisual deficits, and show that whatever the emphasis is put on action or perception, benefits may be observed at both motor and perceptual levels. Given the extent of its neural bases in the brain, vision in its motor and perceptual aspects is also a useful tool to assess and modulate central nervous system (CNS) in general. PMID:25538575

  11. Simulating visibility under reduced acuity and contrast sensitivity.

    PubMed

    Thompson, William B; Legge, Gordon E; Kersten, Daniel J; Shakespeare, Robert A; Lei, Quan

    2017-04-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting-design communities. We validate the simulation using a letter-recognition task.

  12. Simulating Visibility Under Reduced Acuity and Contrast Sensitivity

    PubMed Central

    Thompson, William B.; Legge, Gordon E.; Kersten, Daniel J.; Shakespeare, Robert A.; Lei, Quan

    2017-01-01

    Architects and lighting designers have difficulty designing spaces that are accessible to those with low vision, since the complex nature of most architectural spaces requires a site-specific analysis of the visibility of mobility hazards and key landmarks needed for navigation. We describe a method that can be utilized in the architectural design process for simulating the effects of reduced acuity and contrast on visibility. The key contribution is the development of a way to parameterize the simulation using standard clinical measures of acuity and contrast sensitivity. While these measures are known to be imperfect predictors of visual function, they provide a way of characterizing general levels of visual performance that is familiar to both those working in low vision and our target end-users in the architectural and lighting design communities. We validate the simulation using a letter recognition task. PMID:28375328
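
    A minimal sketch, under stated assumptions, of how reduced acuity and contrast sensitivity might be approximated in an image: a Gaussian blur whose width scales with the minimum angle of resolution implied by a logMAR acuity value, followed by attenuation of contrast about the mean luminance. The scaling choices and parameter names are illustrative and are not the parameterization developed by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_low_vision(image, logmar, contrast_scale, pixels_per_degree):
    """Crudely approximate reduced acuity (Gaussian blur scaled by acuity loss)
    and reduced contrast sensitivity (attenuation toward mean luminance)."""
    mar_deg = (10 ** logmar) / 60.0                 # minimum angle of resolution, degrees
    sigma = mar_deg * pixels_per_degree / 2.0       # illustrative blur-scaling choice
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    return image.mean() + contrast_scale * (blurred - image.mean())

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(256, 256))          # stand-in for a rendered scene
degraded = simulate_low_vision(scene, logmar=1.0, contrast_scale=0.3,
                               pixels_per_degree=30)
print(degraded.shape, round(degraded.min(), 3), round(degraded.max(), 3))
```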

  13. Vision improvement in pilots with presbyopia following perceptual learning.

    PubMed

    Sterkin, Anna; Levy, Yuval; Pokroy, Russell; Lev, Maria; Levian, Liora; Doron, Ravid; Yehezkel, Oren; Fried, Moshe; Frenkel-Nir, Yael; Gordon, Barak; Polat, Uri

    2017-11-24

    Israeli Air Force (IAF) pilots continue flying combat missions after the symptoms of natural near-vision deterioration, termed presbyopia, begin to be noticeable. Because modern pilots rely on the displays of the aircraft control and performance instruments, near visual acuity (VA) is essential in the cockpit. We aimed to apply a method previously shown to improve visual performance of presbyopes, and test whether presbyopic IAF pilots can overcome the limitation imposed by presbyopia. Participants were selected by the IAF aeromedical unit as having at least initial presbyopia and trained using a structured personalized perceptual learning method (GlassesOff application), based on detecting briefly presented low-contrast Gabor stimuli, under the conditions of spatial and temporal constraints, from a distance of 40 cm. Our results show that despite their initial visual advantage over age-matched peers, training resulted in robust improvements in various basic visual functions, including static and temporal VA, stereoacuity, spatial crowding, contrast sensitivity and contrast discrimination. Moreover, improvements generalized to higher-level tasks, such as sentence reading and aerial photography interpretation (specifically designed to reflect IAF pilots' expertise in analyzing noisy low-contrast input). In concert with earlier suggestions, gains in visual processing speed are plausible to account, at least partially, for the observed training-induced improvements. Copyright © 2017 Elsevier Ltd. All rights reserved.
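
    The training stimuli described here are brief, low-contrast Gabor patches. As an illustration of what such a stimulus looks like numerically (a sketch with assumed parameter values, not the GlassesOff implementation), the code below renders a Gabor at a chosen contrast and orientation on a mean-gray background.

```python
import numpy as np

def gabor_patch(size=128, cycles=4.0, sigma_frac=0.15, contrast=0.1, theta=0.0):
    """Luminance image (0-1) of a Gabor patch: a sinusoidal grating in a Gaussian
    envelope, at the requested Michelson contrast and orientation (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half] / size      # coordinates in [-0.5, 0.5)
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2 * np.pi * cycles * xr)            # `cycles` cycles across the patch
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_frac**2))
    return 0.5 + 0.5 * contrast * grating * envelope     # on a mean-gray background

patch = gabor_patch(contrast=0.05, theta=np.pi / 4)      # a low-contrast, oblique target
print(patch.shape, round(patch.min(), 3), round(patch.max(), 3))
```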

  14. Visualization of DNA in highly processed botanical materials.

    PubMed

    Lu, Zhengfei; Rubinsky, Maria; Babajanian, Silva; Zhang, Yanjun; Chang, Peter; Swanson, Gary

    2018-04-15

    DNA-based methods have been gaining recognition as a tool for botanical authentication in herbal medicine; however, their application in processed botanical materials is challenging due to the low quality and quantity of DNA left after extensive manufacturing processes. The low amount of DNA recovered from processed materials, especially extracts, is "invisible" to current technology, which has cast doubt on the presence of amplifiable botanical DNA. A method using adapter-ligation and PCR amplification was successfully applied to visualize the "invisible" DNA in botanical extracts. The size of the "invisible" DNA fragments in botanical extracts was around 20-220 bp compared to fragments of around 600 bp for the more easily visualized DNA in botanical powders. This technique is the first to allow characterization and visualization of small fragments of DNA in processed botanical materials and will provide key information to guide the development of appropriate DNA-based botanical authentication methods in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Coherence and interlimb force control: Effects of visual gain.

    PubMed

    Kang, Nyeonju; Cauraugh, James H

    2018-03-06

    Neural coupling across hemispheres and homologous muscles often appears during bimanual motor control. Force coupling in a specific frequency domain may indicate specific bimanual force coordination patterns. This study investigated coherence on pairs of bimanual isometric index finger force while manipulating visual gain and task asymmetry conditions. We used two visual gain conditions (low and high gain = 8 and 512 pixels/N), and created task asymmetry by manipulating coefficient ratios imposed on the left and right index finger forces (0.4:1.6; 1:1; 1.6:0.4, respectively). Unequal coefficient ratios required different contributions from each hand to the bimanual force task resulting in force asymmetry. Fourteen healthy young adults performed bimanual isometric force control at 20% of their maximal level of the summed force of both fingers. We quantified peak coherence and relative phase angle between hands at 0-4, 4-8, and 8-12 Hz, and estimated a signal-to-noise ratio of bimanual forces. The findings revealed higher peak coherence and relative phase angle at 0-4 Hz than at 4-8 and 8-12 Hz for both visual gain conditions. Further, peak coherence and relative phase angle values at 0-4 Hz were larger at the high gain than at the low gain. At the high gain, higher peak coherence at 0-4 Hz collapsed across task asymmetry conditions significantly predicted greater signal-to-noise ratio. These findings indicate that a greater level of visual information facilitates bimanual force coupling at a specific frequency range related to sensorimotor processing. Copyright © 2018 Elsevier B.V. All rights reserved.
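
    Force coupling in a frequency band is commonly quantified with magnitude-squared coherence. The sketch below shows that generic computation on two synthetic force-like signals sharing a slow common drive, and extracts the 0-4 Hz peak; the sampling rate and signals are assumptions, not the study's estimator or data.

```python
import numpy as np
from scipy.signal import coherence

fs = 100                                    # assumed force sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * 1.5 * t)        # a common slow drive near 1.5 Hz
rng = np.random.default_rng(2)
left = shared + 0.5 * rng.standard_normal(t.size)
right = shared + 0.5 * rng.standard_normal(t.size)

freqs, coh = coherence(left, right, fs=fs, nperseg=4 * fs)
band = (freqs >= 0) & (freqs <= 4)
peak = coh[band].max()
print(f"peak 0-4 Hz coherence: {peak:.2f} at {freqs[band][coh[band].argmax()]:.2f} Hz")
```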

  16. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999

  17. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors 0270-6474/16/361620-11$15.00/0.

  18. Clinically Meaningful Rehabilitation Outcomes of Low Vision Patients Served by Outpatient Clinical Centers.

    PubMed

    Goldstein, Judith E; Jackson, Mary Lou; Fox, Sandra M; Deremeik, James T; Massof, Robert W

    2015-07-01

    To facilitate comparative clinical outcome research in low vision rehabilitation, we must use patient-centered measurements that reflect clinically meaningful changes in visual ability. To quantify the effects of currently provided low vision rehabilitation (LVR) on patients who present for outpatient LVR services in the United States. Prospective, observational study of new patients seeking outpatient LVR services. From April 2008 through May 2011, 779 patients from 28 clinical centers in the United States were enrolled in the Low Vision Rehabilitation Outcomes Study. The Activity Inventory, a visual function questionnaire, was administered to measure overall visual ability and visual ability in 4 functional domains (reading, mobility, visual motor function, and visual information processing) at baseline and 6 to 9 months after usual LVR care. The Geriatric Depression Scale, Telephone Interview for Cognitive Status, and Medical Outcomes Study 36-Item Short-Form Health Survey physical functioning questionnaires were also administered to measure patients' psychological, cognitive, and physical health states, respectively, and clinical findings of patients were provided by study centers. Mean changes in the study population and minimum clinically important differences in the individual in overall visual ability and in visual ability in 4 functional domains as measured by the Activity Inventory. Baseline and post-rehabilitation measures were obtained for 468 patients. Minimum clinically important differences (95% CIs) were observed in nearly half (47% [95% CI, 44%-50%]) of patients in overall visual ability. The prevalence rates of patients with minimum clinically important differences in visual ability in functional domains were reading (44% [95% CI, 42%-48%]), visual motor function (38% [95% CI, 36%-42%]), visual information processing (33% [95% CI, 31%-37%]), and mobility (27% [95% CI, 25%-31%]). The largest average effect size (Cohen d = 0.87) for the population was observed in overall visual ability. Age (P = .006) was an independent predictor of changes in overall visual ability, and logMAR visual acuity (P = .002) was predictive of changes in visual information processing. Forty-four to fifty percent of patients presenting for outpatient LVR show clinically meaningful differences in overall visual ability after LVR, and the average effect sizes in overall visual ability are large, close to 1 SD.

  19. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    PubMed

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Chromatic Perceptual Learning but No Category Effects without Linguistic Input

    PubMed Central

    Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669

  1. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging technology that combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce the data volume efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images while keeping the computational complexity suitable for the computational platforms used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in a WVSN.
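
    A minimal sketch of the kind of comparison described: compress the same bi-level image with several coders and compare output sizes as a proxy for transmission cost. The general-purpose codecs used here (bit-packing plus zlib, bz2, lzma) are stand-ins for illustration, not the six bi-level compression methods evaluated in the paper.

```python
import bz2
import lzma
import zlib
import numpy as np

# Synthetic bi-level "object mask": a mostly empty frame with one filled block.
image = np.zeros((240, 320), dtype=np.uint8)
image[80:160, 120:220] = 1
raw = np.packbits(image).tobytes()          # 1 bit per pixel baseline

candidates = {
    "packed raw": raw,
    "zlib": zlib.compress(raw, 9),
    "bz2": bz2.compress(raw, 9),
    "lzma": lzma.compress(raw),
}
for name, data in candidates.items():
    print(f"{name:>10}: {len(data):5d} bytes "
          f"({len(data) / len(raw):.2%} of packed size)")
```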

  2. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    PubMed

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less degraded than for more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
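
    The motion-energy approach the algorithm builds on can be illustrated with a small opponent-energy sketch in the spirit of classic spatiotemporal energy models; it is not the authors' VLSI pipeline (no ternary edge extraction, confidence map, or fixed-point arithmetic), and all filter parameters are illustrative assumptions.

```python
# Minimal opponent motion-energy sketch on a space-time (x-t) image.
import numpy as np
from scipy.signal import convolve2d

def gabor_st(direction, size=21, sf=0.25, tf=0.25, sigma=4.0):
    """Quadrature (even, odd) space-time Gabor pair tuned to one direction.
    direction = +1 selects rightward motion, -1 leftward."""
    x = np.arange(size) - size // 2
    T, X = np.meshgrid(x, x, indexing="ij")        # rows = time, cols = space
    phase = 2 * np.pi * (sf * X - direction * tf * T)
    env = np.exp(-(X**2 + T**2) / (2 * sigma**2))
    return env * np.cos(phase), env * np.sin(phase)

def opponent_energy(xt):
    """xt: 2D array (time x space). Rightward-minus-leftward motion energy."""
    energies = []
    for d in (+1, -1):
        even, odd = gabor_st(d)
        e = convolve2d(xt, even, mode="same") ** 2 \
            + convolve2d(xt, odd, mode="same") ** 2
        energies.append(e)
    return energies[0] - energies[1]

# A grating drifting rightward at 1 pixel/frame yields net positive energy.
frames, width = 64, 64
xs = np.arange(width)
xt = np.array([np.cos(2 * np.pi * 0.25 * (xs - f)) for f in range(frames)])
print(opponent_energy(xt).mean() > 0)              # True
```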

  4. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
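
    As one simple, generic way to compute optic flow from camera frames, a gradient-based (Lucas-Kanade style) global translation estimate is sketched below; it is an assumption for illustration, not the insect-vision model or bee-world comparison used in the paper.

```python
# Generic gradient-based estimate of a single global translation between two
# grayscale frames (an illustrative stand-in for full optic-flow computation).
import numpy as np

def global_flow(frame0, frame1):
    """Estimate one (vx, vy) translation in pixels/frame from two frames."""
    Iy, Ix = np.gradient(frame0)              # spatial gradients (rows, cols)
    It = frame1 - frame0                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v

# Synthetic test: a smooth periodic texture shifted one pixel to the right.
xs = np.arange(64)
f0 = np.sin(2 * np.pi * 4 * xs / 64)[None, :] * np.cos(2 * np.pi * 3 * xs / 64)[:, None]
f1 = np.roll(f0, 1, axis=1)
print(global_flow(f0, f1))                    # approximately [1, 0]
```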

  5. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  6. Perceptual load corresponds with factors known to influence visual search

    PubMed Central

    Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.

    2014-01-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
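
    For readers unfamiliar with how search efficiency and difficulty are quantified, the standard approach fits reaction time against set size: the slope indexes efficiency and the intercept indexes overall difficulty. The sketch below uses made-up RTs, not data from the study.

```python
# Standard search-efficiency measure: fit mean RT against display set size.
# The RT values are hypothetical, for illustration only.
import numpy as np

set_size = np.array([4, 8, 12, 16])
rt_ms = np.array([520, 560, 610, 655])        # made-up mean reaction times

slope, intercept = np.polyfit(set_size, rt_ms, 1)
print(f"slope = {slope:.1f} ms/item, intercept = {intercept:.0f} ms")
# Shallow slopes (a few ms/item) indicate efficient search; steep slopes
# indicate inefficient search. The intercept indexes overall difficulty.
```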

  7. Dissociable Modulation of Overt Visual Attention in Valence and Arousal Revealed by Topology of Scan Path

    PubMed Central

    Ni, Jianguang; Jiang, Huihui; Jin, Yixiang; Chen, Nanhui; Wang, Jianhong; Wang, Zhengbo; Luo, Yuejia; Ma, Yuanye; Hu, Xintian

    2011-01-01

    Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence-salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal-salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal. PMID:21494331

  8. Age effects on visual-perceptual processing and confrontation naming.

    PubMed

    Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon

    2010-03-01

    The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency manipulations revealed that, in general, young vs. older subjects are faster and more accurate to name. However, this age-related difference is dependent on the spatial frequency of the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older vs. young adults are more accurate to name. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older vs. young adults exhibit a stronger nonliving-object advantage for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications for the aging and naming (and recognition) literature are discussed.

  9. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    PubMed

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing if flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain-specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection

    PubMed Central

    Denison, Rachel N.; Silver, Michael A.

    2014-01-01

    During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685

  11. Maternal play behaviors, child negativity, and preterm or low birthweight toddlers' visual-spatial outcomes: testing a differential susceptibility hypothesis.

    PubMed

    Dilworth-Bart, Janean E; Miller, Kyle E; Hane, Amanda

    2012-04-01

    We examined the joint roles of child negative emotionality and parenting in the visual-spatial development of toddlers born preterm or with low birthweights (PTLBW). Neonatal risk data were collected at hospital discharge, observer- and parent-rated child negative emotionality was assessed at 9-months postterm, and mother-initiated task changes and flexibility during play were observed during a dyadic play interaction at 16-months postterm. Abbreviated IQ scores, and verbal/nonverbal and visual-spatial processing data were collected at 24-months postterm. Hierarchical regression analyses did not support our hypothesis that the visual-spatial processing of PTLBW toddlers with higher negative emotionality would be differentially susceptible to parenting behaviors during play. Instead, observer-rated distress and a negativity composite score were associated with less optimal visual-spatial processing when mothers were more flexible during the 16-month play interaction. Mother-initiated task changes did not interact with any of the negative emotionality variables to predict any of the 24-month neurocognitive outcomes, nor did maternal flexibility interact with mother-rated difficult temperament to predict the visual-spatial processing outcomes. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent thresholds, soft thresholding, and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing using real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
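
    A rough sketch of the shrinkage scheme described above (three-level decomposition, finest level zeroed, subband-dependent soft thresholds), using PyWavelets with a biorthogonal spline wavelet standing in for the spline-based DDWT; the threshold rule here is a generic noise-scaled estimate, not the authors' exact parameterization.

```python
# Sketch of a 3-level wavelet shrinkage filter: zero the finest-level detail
# subbands and soft-threshold the remaining detail subbands.
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="bior2.2", levels=3, k=3.0):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # coeffs = [cA_L, (cH_L, cV_L, cD_L), ..., (cH_1, cV_1, cD_1)]
    out = [coeffs[0]]
    for i, bands in enumerate(coeffs[1:], start=1):
        if i == levels:                      # last tuple = finest (level-1) details
            out.append(tuple(np.zeros_like(b) for b in bands))
        else:
            shrunk = []
            for b in bands:
                sigma = np.median(np.abs(b)) / 0.6745     # robust noise estimate
                shrunk.append(pywt.threshold(b, k * sigma, mode="soft"))
            out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

noisy = np.random.default_rng(0).normal(size=(128, 128))
denoised = wavelet_shrink(noisy)
print(noisy.std(), denoised.std())           # variance drops after shrinkage
```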

  13. Dissociable Roles of Different Types of Working Memory Load in Visual Detection

    PubMed Central

    Konstantinou, Nikos; Lavie, Nilli

    2013-01-01

    We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection. PMID:23713796

  14. What you see is what you expect: rapid scene understanding benefits from prior experience.

    PubMed

    Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li

    2015-05-01

    Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.

  15. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging

    PubMed Central

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-01-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference, in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period, degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629

  16. Neuronal nonlinearity explains greater visual spatial resolution for darks than lights.

    PubMed

    Kremkow, Jens; Jin, Jianzhong; Komban, Stanley J; Wang, Yushi; Lashgari, Reza; Li, Xiaobing; Jansen, Michael; Zaidi, Qasim; Alonso, Jose-Manuel

    2014-02-25

    Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes.

  17. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Synaptic Correlates of Low-Level Perception in V1.

    PubMed

    Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves

    2016-04-06

    The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy changes with retinal flow statistics, and more generally for a renewed focus on intracortical computation. Copyright © 2016 the authors 0270-6474/16/363925-18$15.00/0.

  19. Prevalence and causes of low vision and blindness in Baotou

    PubMed Central

    Zhang, Guisen; Li, Yan; Teng, Xuelong; Wu, Qiang; Gong, Hui; Ren, Fengmei; Guo, Yuxia; Liu, Lei; Zhang, Han

    2016-01-01

    The aim of this study was to investigate the prevalence and causes of low vision and blindness in Baotou, Inner Mongolia. A cross-sectional study was carried out. Multistage sampling was used to select samples. The visual acuity was estimated using LogMAR and corrected by pinhole as best-corrected visual acuity. There were 7000 samples selected and 5770 subjects included in this investigation. The overall bilateral prevalence rates of low vision and blindness were 3.66% (95% CI: 3.17–4.14) and 0.99% (95% CI: 0.73–1.24), respectively. The prevalence of bilateral low vision, blindness, and visual impairment increased with age and decreased with education level. The leading cause of low vision and blindness was cataract. Diabetic retinopathy and age-related macular degeneration were found to be the second leading causes of blindness in Baotou. Low vision and blindness were more prevalent in elderly people and subjects with low education level in Baotou. Cataract was the main cause of visual impairment, and more attention should be paid to fundus diseases. In order to prevent blindness, more eye care programs should be established. PMID:27631267

  20. Prevalence and causes of low vision and blindness in Baotou: A cross-sectional study.

    PubMed

    Zhang, Guisen; Li, Yan; Teng, Xuelong; Wu, Qiang; Gong, Hui; Ren, Fengmei; Guo, Yuxia; Liu, Lei; Zhang, Han

    2016-09-01

    The aim of this study was to investigate the prevalence and causes of low vision and blindness in Baotou, Inner Mongolia. A cross-sectional study was carried out. Multistage sampling was used to select samples. The visual acuity was estimated using LogMAR and corrected by pinhole as best-corrected visual acuity. There were 7000 samples selected and 5770 subjects included in this investigation. The overall bilateral prevalence rates of low vision and blindness were 3.66% (95% CI: 3.17-4.14) and 0.99% (95% CI: 0.73-1.24), respectively. The prevalence of bilateral low vision, blindness, and visual impairment increased with age and decreased with education level. The leading cause of low vision and blindness was cataract. Diabetic retinopathy and age-related macular degeneration were found to be the second leading causes of blindness in Baotou. Low vision and blindness were more prevalent in elderly people and subjects with low education level in Baotou. Cataract was the main cause of visual impairment, and more attention should be paid to fundus diseases. In order to prevent blindness, more eye care programs should be established.
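
    The reported confidence intervals are consistent with a standard normal-approximation interval for a binomial proportion, sketched below. The case count is back-calculated from the reported 3.66% rate and 5770 subjects, so it is an illustrative assumption rather than a figure from the raw data.

```python
# Normal-approximation 95% CI for a prevalence estimate. The case count is
# back-calculated from the reported 3.66% of 5770 subjects (an assumption).
from math import sqrt

n = 5770
cases = round(0.0366 * n)                  # about 211 subjects
p = cases / n
se = sqrt(p * (1 - p) / n)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"prevalence = {p:.2%}, 95% CI ({low:.2%}, {high:.2%})")
# Reproduces roughly the reported 3.66% (3.17-4.14) interval.
```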

  1. The visual information system

    Treesearch

    Merlyn J. Paulson

    1979-01-01

    This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...

  2. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    PubMed

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
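
    A simplified, single-stage sketch of the additive saliency-plus-value drift idea: each alternative accumulates evidence whose drift is a weighted sum of saliency and value (roughly one-third to two-thirds), and the first to reach threshold is chosen. This collapses the paper's two-stage perceptual/economic model into one race, and all parameter values are illustrative assumptions.

```python
# Race of drift-diffusion accumulators with drift = (1/3)*saliency + (2/3)*value.
import numpy as np

def ddm_race(saliency, value, w_sal=1/3, threshold=1.0, noise=0.3,
             dt=0.01, rng=None):
    """Return the index of the first alternative to reach the threshold."""
    rng = rng or np.random.default_rng()
    drift = w_sal * np.asarray(saliency) + (1 - w_sal) * np.asarray(value)
    x = np.zeros(len(drift))
    while True:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drift))
        winners = np.flatnonzero(x >= threshold)
        if winners.size:
            return winners[np.argmax(x[winners])]

# Item 0 is salient but low value; item 1 is less salient but high value.
choices = [ddm_race([0.9, 0.3], [0.2, 0.8]) for _ in range(2000)]
print("P(choose item 1) =", np.mean(np.array(choices) == 1))   # above 0.5
```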

  3. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice

    PubMed Central

    Towal, R. Blythe; Mormann, Milica; Koch, Christof

    2013-01-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. PMID:24019496

  4. Human Mobility Monitoring in Very Low Resolution Visual Sensor Network

    PubMed Central

    Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried

    2014-01-01

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of our proposed system is a robust people tracker that uses low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, the mobility statistics of tracks such as total distance traveled and average speed derived from trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754
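
    The mobility statistics mentioned above (total distance traveled and average speed) follow directly from a tracked trajectory; a minimal sketch is given below with made-up coordinates and frame rate.

```python
# Mobility statistics from a tracked trajectory: total distance and average
# speed. Coordinates (metres) and frame rate are made-up values.
import numpy as np

def mobility_stats(xy, fps):
    """xy: (N, 2) array of positions, one row per frame."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    total_distance = steps.sum()
    duration = (len(xy) - 1) / fps
    return total_distance, total_distance / duration

track = np.array([[0.0, 0.0], [0.2, 0.1], [0.5, 0.1], [0.9, 0.3]])
dist, speed = mobility_stats(track, fps=2.0)
print(f"distance = {dist:.2f} m, average speed = {speed:.2f} m/s")
```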

  5. Skilled deaf readers have an enhanced perceptual span in reading.

    PubMed

    Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith

    2012-07-01

    Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

  6. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  7. Parafoveal and foveal processing of abbreviations during eye fixations in reading: Making a case for case

    PubMed Central

    Slattery, Timothy J.; Schotter, Elizabeth R.; Berry, Raymond W.; Rayner, Keith

    2011-01-01

    The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of two distinct types: Acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence rules, such as NASA) and initialisms (abbreviations in which the grapheme-phoneme correspondences are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also sentences in all capital letters such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading, and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is normal, lower case letters. PMID:21480754

  8. Parafoveal and foveal processing of abbreviations during eye fixations in reading: making a case for case.

    PubMed

    Slattery, Timothy J; Schotter, Elizabeth R; Berry, Raymond W; Rayner, Keith

    2011-07-01

    The processing of abbreviations in reading was examined with an eye movement experiment. Abbreviations were of 2 distinct types: acronyms (abbreviations that can be read with the normal grapheme-phoneme correspondence [GPC] rules, such as NASA) and initialisms (abbreviations in which the GPCs are letter names, such as NCAA). Parafoveal and foveal processing of these abbreviations was assessed with the use of the boundary change paradigm (K. Rayner, 1975). Using this paradigm, previews of the abbreviations were either identical to the abbreviation (NASA or NCAA), orthographically legal (NUSO or NOBA), or illegal (NRSB or NRBA). The abbreviations were presented as capital letter strings within normal, predominantly lowercase sentences and also sentences in all capital letters such that the abbreviations would not be visually distinct. The results indicate that acronyms and initialisms undergo different processing during reading and that readers can modulate their processing based on low-level visual cues (distinct capitalization) in parafoveal vision. In particular, readers may be biased to process capitalized letter strings as initialisms in parafoveal vision when the rest of the sentence is normal, lowercase letters.

  9. The forest, the trees, and the leaves: Differences of processing across development.

    PubMed

    Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier

    2016-08-01

    To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed the global figure) that was presented to 112 participants aged 5, 6, 9, and 21 years old. This task allowed us to explore (a) which level is perceptively the most salient at each age (i.e., the fastest detected level) and (b) what kind of attentional processing occurs for each level across development (i.e., efficient processing: detection time does not increase with the number of stimuli on the display; less efficient processing: detection time increases linearly with the growing number of distractors). Results showed that the global level was the most salient at 5 years of age, whereas the global and intermediate levels were both salient for 9-year-olds and adults. Interestingly, at 6 years of age, the intermediate level was the most salient level. Second, all participants showed an efficient processing of both intermediate and global levels of hierarchical stimuli, and a less efficient processing of the local level, suggesting a local disadvantage rather than a global advantage in visual search. The cognitive cost for selecting the local target was higher for 5- and 6-year-old children compared to 9-year-old children and adults. These results are discussed with regards to the development of executive control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Synergic effects of 10°/s constant rotation and rotating background on visual cognitive processing

    NASA Astrophysics Data System (ADS)

    He, Siyang; Cao, Yi; Zhao, Qi; Tan, Cheng; Niu, Dongbin

    In previous studies we found that constant low-speed rotation facilitated auditory cognitive processing and that a constant-velocity rotating background sped up the perception, recognition, and assessment of visual stimuli. Under constant low-speed rotation the body is exposed to a new physical state. In this study, we explored how the brain's cognitive processing varies under the combined condition of constant low-speed rotation and rotating visual backgrounds of different speeds. Fourteen university students participated in the experiment. EEG signals were recorded while they performed three cognitive tasks of increasing mental load: a no-response task, a selective switch-response task, and a selective mental arithmetic task. A rotary chair was used to create constant low-speed (10°/s) rotation. Four kinds of background were used: a normal black background and simulated star backgrounds rotating at a constant 30°/s, 45°/s, or 60°/s. The P1 and N1 components of the event-related potentials (ERPs) were analyzed to detect changes in early visual cognitive processing. Compared with the other backgrounds, the posterior P1 and N1 latencies were shortened under the 45°/s rotating background in all cognitive tasks. In the no-response task, compared with the black background, the posterior N1 latencies were delayed under the 30°/s rotating background. In the selective switch-response and selective mental arithmetic tasks, compared with the other backgrounds, the P1 latencies were lengthened under the 60°/s rotating background, but the average amplitudes of the posterior P1 and N1 were increased. These results suggest that under constant 10°/s rotation, the facilitatory effect of a rotating visual background turned into an inhibitory one for the 30°/s background. In this new vestibular environment, not all rotating backgrounds accelerated the early stages of visual cognition. There is a synergic effect between constant low-speed rotation and the rotation speed of the background. Under certain conditions they jointly facilitate visual cognitive processing, beginning at the stage at which extrastriate cortex perceives the visual signal. Under constant low-speed rotation in the higher-load tasks, rapid rotation of the background enhanced signal transmission in the visual pathway, increasing the signal-to-noise ratio, and a higher signal-to-noise ratio clearly favors target perception and recognition. This gave rise to the hypothesis that higher cognitive load tasks, with stronger top-down control, are better able to counteract the inhibitory effect of a fast-rotating background. Acknowledgements: This project was supported by the National Natural Science Foundation of China (No. 30670715) and the National High Technology Research and Development Program of China (No. 2007AA04Z254).

  11. Visualizing speciation in artificial cichlid fish.

    PubMed

    Clement, Ross

    2006-01-01

    The Cichlid Speciation Project (CSP) is an ALife simulation system for investigating open problems in the speciation of African cichlid fish. The CSP can be used to perform a wide range of experiments that show that speciation is a natural consequence of certain biological systems. A visualization system capable of extracting the history of speciation from low-level trace data and creating a phylogenetic tree has been implemented. Unlike previous approaches, this visualization system presents a concrete trace of speciation, rather than a summary of low-level information from which the viewer can make subjective decisions on how speciation progressed. The phylogenetic trees are a more objective visualization of speciation, and enable automated collection and summarization of the results of experiments. The visualization system is used to create a phylogenetic tree from an experiment that models sympatric speciation.

  12. The Timing of Visual Object Categorization

    PubMed Central

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480

  13. EEG Topographic Mapping of Visual and Kinesthetic Imagery in Swimmers.

    PubMed

    Wilson, V E; Dikman, Z; Bird, E I; Williams, J M; Harmison, R; Shaw-Thornton, L; Schwartz, G E

    2016-03-01

    This study investigated differences in QEEG measures between kinesthetic and visual imagery of a 100-m swim in 36 elite competitive swimmers. Background information and post-trial checks controlled for the modality of imagery, swimming skill level, preferred imagery style, intensity of image and task equality. Measures of EEG relative magnitude in theta, low (7-9 Hz) and high alpha (8-10 Hz), and low and high beta were taken from 19 scalp sites during baseline, visual, and kinesthetic imagery. QEEG magnitude in the low alpha band was attenuated from baseline during both the visual and kinesthetic conditions, but no changes were seen in any other bands. Swimmers produced more low alpha EEG magnitude during visual versus kinesthetic imagery. This was interpreted as the swimmers having a greater efficiency at producing visual imagery. Participants who reported a strong versus a weaker feeling of the image (kinesthetic) had less low alpha magnitude, i.e., they used more cortical resources, but this was not the case for the visual condition. These data suggest that low band (7-9 Hz) alpha distinguishes imagery modalities from baseline, that visual imagery requires fewer cortical resources than kinesthetic imagery, and that intense feelings of swimming require more brain activity than less intense feelings.

  14. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory

    PubMed Central

    Murty, Vishnu P.; Tompary, Alexa; Adcock, R. Alison

    2017-01-01

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. SIGNIFICANCE STATEMENT Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. PMID:28100737

  15. Multi-modal distraction: insights from children's limited attention.

    PubMed

    Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia

    2015-03-01

    How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
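
    The distinction drawn here rests on simple viewing geometry: an image-based spatial frequency (cycles/image) maps onto different retina-based values (cycles/degree) as viewing distance changes. The short Python sketch below makes that relationship explicit; the image size, distances, and frequency are illustrative assumptions, not values from the study.

      import math

      def cycles_per_degree(cycles_per_image, image_width_cm, viewing_distance_cm):
          """Convert an image-based SF (cycles/image) into a retina-based SF
          (cycles/degree) for a given physical image size and viewing distance."""
          # Angular size of the image in degrees of visual angle.
          angular_size_deg = 2 * math.degrees(
              math.atan(image_width_cm / (2 * viewing_distance_cm)))
          return cycles_per_image / angular_size_deg

      # The same 8 cycles/image face pattern viewed from 57 cm and from 200 cm:
      for distance_cm in (57, 200):
          print(distance_cm, round(cycles_per_degree(8, 10, distance_cm), 2))
      # ~0.80 c/deg at 57 cm versus ~2.79 c/deg at 200 cm, so only tuning in
      # cycles/image yields a viewing-distance-invariant representation.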

  17. PLIF Study of Mars Science Laboratory Capsule Reaction Control System Jets

    NASA Technical Reports Server (NTRS)

    Johansen, C. T.; Danehy, P. M.; Ashcraft, S. W.; Bathel, B. F.; Inman, J. A.; Jones, S. B.

    2011-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to visualize the flow in the wake of a Mars Science Lab (MSL) entry capsule with activated reaction control system (RCS) jets in NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel facility. Images were processed using the Virtual Diagnostics Interface (ViDI) method, which brings out the three-dimensional nature of the flow visualization data while showing the relative location of the data with respect to the model. Comparison of wind-on and wind-off results illustrates the effect that the hypersonic crossflow has on the trajectory and structure of individual RCS jets. The visualization and comparison of both single and multiple activated RCS jets indicate low levels of jet-jet interaction. Quantitative streamwise velocity was also obtained via NO PLIF molecular tagging velocimetry (MTV).

  18. PLIF Study of Mars Science Laboratory Capsule Reaction Control System Jets

    NASA Technical Reports Server (NTRS)

    Johansen, C. T.; Danehy, P. M.; Ashcraft, S. W.; Bathel, B. F.; Inman, J. A.; Jones, S. B.

    2011-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to visualize the flow in the wake of a Mars Science Lab (MSL) entry capsule with activated reaction control system (RCS) jets in NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel facility. Images were processed using the Virtual Diagnostics Interface (ViDI) method, which brings out the three-dimensional nature of the flow visualization data while showing the relative location of the data with respect to the model. Comparison of wind-on and wind-off results illustrates the effect that the hypersonic crossflow has on the trajectory and structure of individual RCS jets. The visualization and comparison of both single and multiple activated RCS jets indicate low levels of jet-jet interaction. Quantitative streamwise velocity was also obtained via NO PLIF molecular tagging velocimetry (MTV).

  19. Age-related macular degeneration changes the processing of visual scenes in the brain.

    PubMed

    Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole

    2018-01-01

    In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on high spatial frequency processing, is impaired, while the processing of global shapes, based on low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black and white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
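
    Root-mean-square (RMS) contrast equalization, the normalization mentioned above, can be sketched in a few lines of Python. This is a minimal illustration under the assumption of grayscale images stored as floating-point arrays in [0, 1]; the study's exact normalization parameters are not reproduced here.

      import numpy as np

      def equalize_rms_contrast(image, target_rms=0.2, target_mean=0.5):
          """Rescale luminances so every image shares the same mean luminance
          and the same RMS contrast (standard deviation of luminance)."""
          img = image.astype(float)
          zero_mean = img - img.mean()
          current_rms = zero_mean.std()
          if current_rms == 0:
              return np.full_like(img, target_mean)   # uniform image: nothing to rescale
          out = target_mean + zero_mean * (target_rms / current_rms)
          return np.clip(out, 0.0, 1.0)   # clipping may slightly perturb the targets

    Because high-pass filtered scenes have a much lower native RMS contrast than unfiltered ones, rescaling every scene to a common target effectively amplifies the high spatial frequency content.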

  20. Low-Pressure Burst-Mode Focused Ultrasound Wave Reconstruction and Mapping for Blood-Brain Barrier Opening: A Preclinical Examination

    PubMed Central

    Xia, Jingjing; Tsui, Po-Hsiang; Liu, Hao-Li

    2016-01-01

    Burst-mode focused ultrasound (FUS) exposure has been shown to induce transient blood-brain barrier (BBB) opening for potential CNS drug delivery. FUS-BBB opening requires imaging guidance during the intervention, yet current imaging technology only enables postoperative outcome confirmation. In this study, we propose an approach for visualizing the short-burst, low-pressure focal beam distribution that can be applied during FUS-BBB opening interventions in small animals. A backscattered acoustic-wave reconstruction method based on synchronization among focused ultrasound emission, diagnostic ultrasound reception, and passively beamformed processing was developed. We observed that the focal beam could be successfully visualized for in vitro FUS exposure at 0.5–2 MHz without the involvement of microbubbles. The detectable level of FUS exposure was 0.467 MPa in pressure and 0.05 ms in burst length. The signal intensity (SI) of the reconstructions was linearly correlated with the FUS exposure level both in vitro (r² = 0.9878) and in vivo (r² = 0.9943), and the SI of the reconstructed focal beam also correlated with the success and extent of BBB opening. The proposed approach provides a feasible way to perform real-time, closed-loop control of FUS-based brain drug delivery. PMID:27295608

  1. Low-Pressure Burst-Mode Focused Ultrasound Wave Reconstruction and Mapping for Blood-Brain Barrier Opening: A Preclinical Examination

    NASA Astrophysics Data System (ADS)

    Xia, Jingjing; Tsui, Po-Hsiang; Liu, Hao-Li

    2016-06-01

    Burst-mode focused ultrasound (FUS) exposure has been shown to induce transient blood-brain barrier (BBB) opening for potential CNS drug delivery. FUS-BBB opening requires imaging guidance during the intervention, yet current imaging technology only enables postoperative outcome confirmation. In this study, we propose an approach for visualizing the short-burst, low-pressure focal beam distribution that can be applied during FUS-BBB opening interventions in small animals. A backscattered acoustic-wave reconstruction method based on synchronization among focused ultrasound emission, diagnostic ultrasound reception, and passively beamformed processing was developed. We observed that the focal beam could be successfully visualized for in vitro FUS exposure at 0.5-2 MHz without the involvement of microbubbles. The detectable level of FUS exposure was 0.467 MPa in pressure and 0.05 ms in burst length. The signal intensity (SI) of the reconstructions was linearly correlated with the FUS exposure level both in vitro (r² = 0.9878) and in vivo (r² = 0.9943), and the SI of the reconstructed focal beam also correlated with the success and extent of BBB opening. The proposed approach provides a feasible way to perform real-time, closed-loop control of FUS-based brain drug delivery.
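
    The reported linear relationship between exposure pressure and the signal intensity (SI) of the reconstructions is an ordinary least-squares fit summarized by r². The following Python sketch shows the computation on hypothetical numbers; neither the data points nor the fitted coefficients come from the study.

      import numpy as np
      from scipy.stats import linregress

      pressure_mpa = np.array([0.47, 0.62, 0.86, 1.10, 1.43])      # assumed exposure levels
      signal_intensity = np.array([11.0, 14.6, 20.3, 25.9, 33.2])  # assumed SI readings

      fit = linregress(pressure_mpa, signal_intensity)
      print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, "
            f"r^2 = {fit.rvalue ** 2:.4f}")
      # A high r^2, as reported both in vitro and in vivo, is what licenses using
      # the reconstructed SI as a near-linear proxy for the delivered pressure.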

  2. Planning of visually guided reach-to-grasp movements: inference from reaction time and contingent negative variation (CNV).

    PubMed

    Zaepffel, Manuel; Brochier, Thomas

    2012-01-01

    We performed electroencephalogram (EEG) recording in a precuing task to investigate the planning processes of reach-to-grasp movements in humans. In this reaction time (RT) task, subjects had to reach, grasp, and pull an object as fast as possible after a visual GO signal. We manipulated two parameters: the hand shape for grasping (precision grip or side grip) and the force required to pull the object (high or low). Three seconds before the GO onset, a cue provided advance information about force, grip, both parameters, or no information at all. EEG data show that reach-to-grasp movements generate differences in the topographic distribution of the late Contingent Negative Variation (lCNV) amplitude across the four precuing conditions. Together with the RT data, this confirms that two distinct functional networks, with different time courses, are involved in the planning of grip and force. Finally, we outline the composite nature of the lCNV, which might reflect both high- and low-level planning processes. Copyright © 2011 Society for Psychophysiological Research.

  3. New techniques for experimental generation of two-dimensional blade-vortex interaction at low Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Booth, E., Jr.; Yu, J. C.

    1986-01-01

    An experimental investigation of two-dimensional blade-vortex interaction was conducted at NASA Langley Research Center. The first phase was a flow visualization study to document the approach process of a two-dimensional vortex as it encountered a loaded blade model. To accomplish the flow visualization study, a method for generating two-dimensional vortex filaments was required. The numerical study used to define the new vortex generation process and the use of this process in the flow visualization study are documented. Additionally, the photographic techniques and data analysis methods used in the flow visualization study are examined.

  4. Braille Reading Accuracy of Students Who Are Visually Impaired: The Effects of Gender, Age at Vision Loss, and Level of Education

    ERIC Educational Resources Information Center

    Argyropoulos, Vassilis; Papadimitriou, Vassilios

    2015-01-01

    Introduction: The present study assesses the performance of students who are visually impaired (that is, those who are blind or have low vision) in braille reading accuracy and examines potential correlations among the error categories on the basis of gender, age at loss of vision, and level of education. Methods: Twenty-one visually impaired…

  5. Recovery from Object Substitution Masking Induced by Transient Suppression of Visual Motion Processing: A Repetitive Transcranial Magnetic Stimulation Study

    ERIC Educational Resources Information Center

    Hirose, Nobuyuki; Kihara, Ken; Mima, Tatsuya; Ueki, Yoshino; Fukuyama, Hidenao; Osaka, Naoyuki

    2007-01-01

    Object substitution masking is a form of visual backward masking in which a briefly presented target is rendered invisible by a lingering mask that is too sparse to produce lower image-level interference. Recent studies suggested the importance of an updating process in a higher object-level representation, which should rely on the processing of…

  6. Flexible Visual Processing in Young Adults with Autism: The Effects of Implicit Learning on a Global-Local Task

    ERIC Educational Resources Information Center

    Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.

    2012-01-01

    We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a group of typically developing young adults matched on chronological age and IQ (n = 12). In one task, participants attended to one level of the…

  7. Dissociable neural correlates of contour completion and contour representation in illusory contour perception.

    PubMed

    Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren

    2012-10-01

    Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Although neural activity in response to ICs has been reported in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception, and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that neural activity was stronger in response to stimuli with more contour completion than to stimuli with more contour representation in V1 and V2, which was the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, the activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.

  8. Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties

    PubMed Central

    Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.

    2011-01-01

    A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400

  9. The Pivotal Role of the Right Parietal Lobe in Temporal Attention.

    PubMed

    Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella

    2017-05-01

    The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients with right parietal lesions, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the flicker rate of four flickering disks, and on most of the trials, one disk (either in the left or right visual field) flickered out of phase relative to the others. We asked participants to report whether the two disks presented on the left or on the right were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over the right TPJ, the left TPJ, or early visual cortex as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over the left TPJ or over early visual cortex did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.

  10. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324

  11. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  12. Developmental dyslexia: exploring how much phonological and visual attention span disorders are linked to simultaneous auditory processing deficits.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-07-01

    The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.

  13. Streaming Visual Analytics Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Burtner, Edwin R.; Kritzstein, Brian P.

    How can we best enable users to understand complex emerging events and make appropriate assessments from streaming data? This was the central question addressed at a three-day workshop on streaming visual analytics. This workshop was organized by Pacific Northwest National Laboratory for a government sponsor. It brought together forty researchers and subject matter experts from government, industry, and academia. This report summarizes the outcomes from that workshop. It describes elements of the vision for a streaming visual analytics environment and a set of important research directions needed to achieve this vision. Streaming data analysis is in many ways the analysis and understanding of change. However, current visual analytics systems usually focus on static data collections, meaning that dynamically changing conditions are not appropriately addressed. The envisioned mixed-initiative streaming visual analytics environment creates a collaboration between the analyst and the system to support the analysis process. It raises the level of discourse from low-level data records to higher-level concepts. The system supports the analyst’s rapid orientation and reorientation as situations change. It provides an environment to support the analyst’s critical thinking. It infers tasks and interests based on the analyst’s interactions. The system works as both an assistant and a devil’s advocate, finding relevant data and alerts as well as considering alternative hypotheses. Finally, the system supports sharing of findings with others. Making such an environment a reality requires research in several areas. The workshop discussions focused on four broad areas: support for critical thinking, visual representation of change, mixed-initiative analysis, and the use of narratives for analysis and communication.

  14. Critical periods and amblyopia.

    PubMed

    Daw, N W

    1998-04-01

    During the past 20 years, basic science has shown that there are different critical periods for different visual functions during the development of the visual system. Visual functions processed at higher anatomical levels within the system have a later critical period than functions processed at lower levels. This general principle suggests that treatments for amblyopia should be followed in a logical sequence, with treatment for each visual function to be started before its critical period is over. However, critical periods for some visual functions, such as stereopsis, are not yet fully determined, and the optimal treatment is, therefore, unknown. This article summarizes the current extent of our knowledge and points to the gaps that need to be filled.

  15. Psychological adjustment and levels of self esteem in children with visual-motor integration difficulties influences the results of a randomized intervention trial.

    PubMed

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z

    2013-01-01

    This study evaluates the extent to which the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first-grade children with poor visual-motor integration skills from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children from low socioeconomic backgrounds, scoring below the 25th percentile on a measure of visual-motor integration (VMI), were recruited and randomly divided into two parallel intervention groups. One intervention group received a directive visual-motor intervention (DVMI), while the second intervention group received a non-directive supportive intervention (NDSI). Tests were administered to evaluate visual-motor integration outcomes. Children with higher baseline measures of psychological adjustment and self-esteem responded better to the NDSI, while children with lower baseline performance on psychological adjustment and self-esteem responded better to the DVMI. This study suggests that children from low socioeconomic backgrounds with low VMI performance scores will benefit more from intervention programs if clinicians choose the type of intervention according to baseline psychological adjustment and self-esteem measures. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the "critical period." The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  17. Perceptions of the Visually Impaired toward Pursuing Geography Courses and Majors in Higher Education

    ERIC Educational Resources Information Center

    Murr, Christopher D.; Blanchard, R. Denise

    2011-01-01

    Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…

  18. New insights into ambient and focal visual fixations using an automatic classification algorithm

    PubMed Central

    Follet, Brice; Le Meur, Olivier; Baccino, Thierry

    2011-01-01

    Overt visual attention is the act of directing the eyes toward a given area. These eye movements are characterised by saccades and fixations. A debate currently surrounds the role of visual fixations. Do they all have the same role in the free viewing of natural scenes? Recent studies suggest that at least two types of visual fixations exist: focal and ambient. The former is believed to be used to inspect local areas accurately, whereas the latter is used to obtain the context of the scene. We investigated the use of an automated system to cluster visual fixations in two groups using four types of natural scene images. We found new evidence to support a focal–ambient dichotomy. Our data indicate that the determining factor is the saccade amplitude. The dependence on the low-level visual features and the time course of these two kinds of visual fixations were examined. Our results demonstrate that there is an interplay between both fixation populations and that focal fixations are more dependent on low-level visual features than are ambient fixations. PMID:23145248
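
    Because the determining factor reported here is saccade amplitude, a rough version of the focal/ambient labelling can be written compactly. The sketch below uses a fixed amplitude cutoff and hypothetical field names; the paper itself relied on an automatic clustering procedure rather than a hand-picked threshold.

      from dataclasses import dataclass

      @dataclass
      class Fixation:
          x_deg: float          # horizontal position, degrees of visual angle
          y_deg: float          # vertical position, degrees of visual angle
          duration_ms: float

      def classify_fixations(fixations, amplitude_threshold_deg=5.0):
          """Label every fixation except the last as 'focal' or 'ambient'
          according to the amplitude of the saccade that follows it."""
          labels = []
          for current, following in zip(fixations, fixations[1:]):
              amplitude = ((following.x_deg - current.x_deg) ** 2 +
                           (following.y_deg - current.y_deg) ** 2) ** 0.5
              labels.append("focal" if amplitude < amplitude_threshold_deg else "ambient")
          return labels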

  19. Do rats use shape to solve “shape discriminations”?

    PubMed Central

    Minini, Loredana; Jeffery, Kathryn J.

    2006-01-01

    Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141

  20. Attentional and physiological processing of food images in functional dyspepsia patients: A pilot study.

    PubMed

    Lee, In-Seon; Preissl, Hubert; Giel, Katrin; Schag, Kathrin; Enck, Paul

    2018-01-23

    Food-related behavior in functional dyspepsia has been attracting increasing interest of late. This pilot study aims to provide evidence on the physiological, emotional, and attentional aspects of food processing in functional dyspepsia patients. The study was performed in 15 functional dyspepsia patients and 17 healthy controls after a standard breakfast. We measured autonomic nervous system activity using the skin conductance response and heart rate variability, emotional response using facial electromyography, and visual attention using eye tracking during the presentation of food and non-food images. In comparison to healthy controls, functional dyspepsia patients showed a greater craving for food, a decreased intake of food, more dyspeptic symptoms, lower pleasantness ratings of food images (particularly high-fat foods), a decreased low frequency/high frequency ratio of heart rate variability, and a reduced total processing time of food images. There were no significant differences in skin conductance response or facial electromyography data between groups. The results suggest that high-level cognitive functions, rather than autonomic and emotional mechanisms, are more likely to function differently in functional dyspepsia patients. Abnormal dietary behavior, reduced subjective pleasantness ratings, and reduced visual attention to food should be considered important pathophysiological characteristics of functional dyspepsia.
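
    The low frequency/high frequency (LF/HF) ratio of heart rate variability mentioned above is a standard spectral measure computed from the series of interbeat (RR) intervals. The Python sketch below shows one common way to obtain it (even resampling followed by Welch's power spectral density); the sampling rate, band limits, and pipeline details are conventional assumptions rather than the study's exact analysis.

      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def lf_hf_ratio(rr_intervals_s, fs=4.0):
          """LF (0.04-0.15 Hz) over HF (0.15-0.40 Hz) power of the RR-interval series."""
          beat_times = np.cumsum(rr_intervals_s)                       # seconds
          even_times = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
          rr_even = interp1d(beat_times, rr_intervals_s, kind="cubic")(even_times)
          freqs, psd = welch(rr_even - rr_even.mean(), fs=fs,
                             nperseg=min(256, len(rr_even)))
          lf_band = (freqs >= 0.04) & (freqs < 0.15)
          hf_band = (freqs >= 0.15) & (freqs < 0.40)
          lf = np.trapz(psd[lf_band], freqs[lf_band])
          hf = np.trapz(psd[hf_band], freqs[hf_band])
          return lf / hf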

  1. Underlying mechanisms of writing difficulties among children with neurofibromatosis type 1.

    PubMed

    Gilboa, Yafit; Josman, Naomi; Fattal-Valevski, Aviva; Toledano-Alhadef, Hagit; Rosenblum, Sara

    2014-06-01

    Writing is a complex activity in which lower-level perceptual-motor processes and higher-level cognitive processes continuously interact. Preliminary evidence suggests that writing difficulties are common in children with Neurofibromatosis type 1 (NF1). The aim of this study was to compare the performance of children with and without NF1 on lower-level processes (visual perception, motor coordination, and visual-motor integration) and higher-level processes (verbal and performance intelligence, visual-spatial organization, and visual memory) required for intact writing, and to identify the components that predict the written product's spatial arrangement and content among children with NF1. Thirty children with NF1 (ages 8-16) and 30 typically developing children matched by gender and age were tested using standardized assessments. Children with NF1 performed significantly worse than control children on all tests that measured lower- and higher-level processes. Cognitive planning skill was found to predict the written product's spatial arrangement, and verbal intelligence predicted the content level of the written product. Results suggest that high-level processes underlie the poor quality of the written product in children with NF1. Treatment approaches for children with NF1 must include detailed assessments of cognitive planning and language skills. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Perceptual load-dependent neural correlates of distractor interference inhibition.

    PubMed

    Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N

    2011-01-18

    The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) when the perceptual load of the central task is low, but in the early stage of perceptual processing when perceptual load is high. Consistently, studies on the neural correlates of attention have found smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital cortex, and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load.

  3. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing

    PubMed Central

    Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2017-01-01

    Reading fluency is a critical skill for improving the quality of our daily life and working efficiency. The majority of previous studies have focused on oral reading fluency rather than silent reading fluency, even though silent reading is the more dominant mode, used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores on the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed for both oral and silent reading fluency, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing made a significant contribution to reading fluency in the silent mode but not in the oral reading mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ as early as basic visual coding. The current results also point to a potential modulatory role of the language characteristics of Chinese in the relationship between visual rapid processing and reading fluency. PMID:28119663

  4. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing.

    PubMed

    Zhao, Jing; Kwok, Rosa K W; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2016-01-01

    Reading fluency is a critical skill for improving the quality of our daily life and working efficiency. The majority of previous studies have focused on oral reading fluency rather than silent reading fluency, even though silent reading is the more dominant mode, used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in the two modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively; these two tasks reflect the temporal and spatial dimensions of visual rapid processing. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores on the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed for both oral and silent reading fluency, a comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing made a significant contribution to reading fluency in the silent mode but not in the oral reading mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ as early as basic visual coding. The current results also point to a potential modulatory role of the language characteristics of Chinese in the relationship between visual rapid processing and reading fluency.

  5. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM), or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation, and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details allows software development efforts to be concentrated on algorithm implementation. Our framework enables biomedical image analysis software to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  6. Speed but not amplitude of visual feedback exacerbates force variability in older adults.

    PubMed

    Kim, Changki; Yacoubi, Basma; Christou, Evangelos A

    2018-06-23

    Magnification of visual feedback (VF) impairs force control in older adults. In this study, we aimed to determine whether the age-associated increase in force variability with magnification of visual feedback is a consequence of increased amplitude or speed of visual feedback. Seventeen young and 18 older adults performed a constant isometric force task with the index finger at 5% of MVC. We manipulated the vertical (force gain) and horizontal (time gain) aspect of the visual feedback so participants performed the task with the following VF conditions: (1) high amplitude-fast speed; (2) low amplitude-slow speed; (3) high amplitude-slow speed. Changing the visual feedback from low amplitude-slow speed to high amplitude-fast speed increased force variability in older adults but decreased it in young adults (P < 0.01). Changing the visual feedback from low amplitude-slow speed to high amplitude-slow speed did not alter force variability in older adults (P > 0.2), but decreased it in young adults (P < 0.01). Changing the visual feedback from high amplitude-slow speed to high amplitude-fast speed increased force variability in older adults (P < 0.01) but did not alter force variability in young adults (P > 0.2). In summary, increased force variability in older adults with magnification of visual feedback was evident only when the speed of visual feedback increased. Thus, we conclude that in older adults deficits in the rate of processing visual information and not deficits in the processing of more visual information impair force control.

  7. The theoretical cognitive process of visualization for science education.

    PubMed

    Mnguni, Lindelani E

    2014-01-01

    The use of visual models such as pictures, diagrams, and animations in science education is increasing. This is because of the complex nature of the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts, especially those that exist at a microscopic level, such as DNA, the gene, and meiosis, as well as those that operate over relatively large time scales, such as evolution. However, the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization, answering the question "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?" Based on various theories of cognitive processes during learning for science and general education, the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models, and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.

  8. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
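
    The "tone detection sensitivity" referred to in the behavioral experiment is a signal-detection quantity, typically d′. A minimal Python sketch of its computation is given below; the trial counts are illustrative only, not data from the study.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate), with a 1/(2N) correction
          so that rates of exactly 0 or 1 do not produce infinite z-scores."""
          n_signal = hits + misses
          n_noise = false_alarms + correct_rejections
          hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
          fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical counts: a lower d' under high visual load would indicate
      # reduced awareness of the unattended tones.
      print(d_prime(38, 12, 8, 42))    # low-load block
      print(d_prime(27, 23, 10, 40))   # high-load block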

  9. Adjustable typography: an approach to enhancing low vision text accessibility.

    PubMed

    Arditi, Aries

    2004-04-15

    Millions of people have low vision, a disability condition caused by uncorrectable or partially correctable disorders of the eye. The primary goal of low vision rehabilitation is increasing access to printed material. This paper describes how adjustable typography, a computer graphic approach to enhancing text accessibility, can play a role in this process, by allowing visually-impaired users to customize fonts to maximize legibility according to their own visual needs. Prototype software and initial testing of the concept is described. The results show that visually-impaired users tend to produce a variety of very distinct fonts, and that the adjustment process results in greatly enhanced legibility. But this initial testing has not yet demonstrated increases in legibility over and above the legibility of highly legible standard fonts such as Times New Roman.

  10. G-Induced Visual Symptoms in a Military Helicopter Pilot.

    PubMed

    McMahon, Terry W; Newman, David G

    2016-11-01

    Military helicopters are increasingly agile and capable of producing significant G forces experienced in the longitudinal (z) axis of the body in a head-to-foot direction (+Gz). Dehydration and fatigue can adversely affect a pilot's +Gz tolerance, leading to +Gz-induced symptomatology occurring at lower +Gz levels than expected. The potential for adverse consequences of +Gz exposure to affect flight safety in military helicopter operations needs to be recognized. This case report describes a helicopter pilot who experienced +Gz-induced visual impairment during low-level flight. The incident occurred during a tropical training exercise, with an ambient temperature of around 35°C (95°F). As a result of the operational tempo and the environmental conditions, aircrew were generally fatigued and dehydrated. During a low-level steep turn, a Blackhawk pilot experienced significant visual deterioration. The +Gz level was estimated at +2.5 Gz. After completing the turn, the pilot's vision returned to normal, and the flight concluded without further incident. This case highlights the potential dangers of +Gz exposure in tactical helicopters. Although the +Gz level was moderate, the pilot's +Gz tolerance was reduced by the combined effects of dehydration and fatigue. The dangers of such +Gz-induced visual impairment during low-level flight are clear. More awareness of +Gz physiology and +Gz tolerance-reducing factors in helicopter operations is needed. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  11. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. Another area that positively correlated with auditory speech recovery was localized in the left inferior frontal area, known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  12. Edge co-occurrences can account for rapid categorization of natural versus animal images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.; Bednar, James A.

    2015-06-01

    Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
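
    A drastically simplified version of the edge co-occurrence statistic can be computed with off-the-shelf tools: detect oriented edges, then histogram the orientation differences of randomly sampled edge pairs. The Python sketch below is only an illustration of the idea; the paper's pipeline used scale-space edge extraction with sparse coding, and its "association field" also conditions on the relative position of each edge pair.

      import numpy as np
      from scipy import ndimage

      def edge_cooccurrence_histogram(image, n_pairs=20000, n_bins=18, seed=0):
          """Histogram of pairwise orientation differences among strong edges."""
          rng = np.random.default_rng(seed)
          gy = ndimage.sobel(image.astype(float), axis=0)
          gx = ndimage.sobel(image.astype(float), axis=1)
          magnitude = np.hypot(gx, gy)
          orientation = np.arctan2(gy, gx) % np.pi          # undirected edge orientation
          edge_theta = orientation[magnitude > np.percentile(magnitude, 90)]
          i = rng.integers(0, edge_theta.size, n_pairs)
          j = rng.integers(0, edge_theta.size, n_pairs)
          diff = np.abs(edge_theta[i] - edge_theta[j])
          diff = np.minimum(diff, np.pi - diff)             # wrap into [0, pi/2]
          hist, _ = np.histogram(diff, bins=n_bins, range=(0, np.pi / 2), density=True)
          return hist   # animal images should show relatively more non-parallel (curved) pairs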

  13. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency, edge-based recognition vs. low spatial frequency, lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  14. Non-conscious processing of motion coherence can boost conscious access.

    PubMed

    Kaunitz, Lisandro; Fracasso, Alessio; Lingnau, Angelika; Melcher, David

    2013-01-01

    Research on the scope and limits of non-conscious vision can advance our understanding of the functional and neural underpinnings of visual awareness. Here we investigated whether distributed local features can be bound, outside of awareness, into coherent patterns. We used continuous flash suppression (CFS) to create interocular suppression, and thus lack of awareness, for a moving dot stimulus that varied in terms of coherence with an overall pattern (radial flow). Our results demonstrate that for radial motion, coherence favors the detection of patterns of moving dots even under interocular suppression. Coherence caused dots to break through the masks more often: this indicates that the visual system was able to integrate low-level motion signals into a coherent pattern outside of visual awareness. In contrast, in an experiment using meaningful or scrambled biological motion we did not observe any increase in the sensitivity of detection for meaningful patterns. Overall, our results are in agreement with previous studies on face processing and with the hypothesis that certain features are spatiotemporally bound into coherent patterns even outside of attention or awareness.

  15. Causal reports: Context-dependent contributions of intuitive physics and visual impressions of launching.

    PubMed

    Vicovaro, Michele

    2018-05-01

    Everyday causal reports appear to be based on a blend of perceptual and cognitive processes. Causality can sometimes be perceived automatically through low-level visual processing of stimuli, but it can also be inferred on the basis of an intuitive understanding of the physical mechanism that underlies an observable event. We investigated how visual impressions of launching and the intuitive physics of collisions contribute to the formation of explicit causal responses. In Experiment 1, participants observed collisions between realistic objects differing in apparent material and hence implied mass, whereas in Experiment 2, participants observed collisions between abstract, non-material objects. The results of Experiment 1 showed that ratings of causality were mainly driven by the intuitive physics of collisions, whereas the results of Experiment 2 provided some support for the hypothesis that ratings of causality were mainly driven by visual impressions of launching. These results suggest that stimulus factors and experimental design factors - such as the realism of the stimuli and the variation in the implied mass of the colliding objects - may determine the relative contributions of perceptual and post-perceptual cognitive processes to explicit causal responses. A revised version of the impetus transmission heuristic provides a satisfactory explanation for these results, whereas the hypothesis that causal responses and intuitive physics are based on the internalization of physical laws does not. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    PubMed

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
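
    For reference, the rate equation behind such TVA parameter estimates is standardly written as below (this is Bundesen's general formulation, not a derivation specific to this study; S denotes the set of objects in the visual field and R the set of perceptual categories).

```latex
% TVA rate equation: v(x,i) is the rate at which object x is encoded as a
% member of category i.
v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
\qquad
w_x \;=\; \sum_{j \in R} \eta(x,j)\,\pi_j
% \eta(x,i): sensory evidence that x belongs to i;  \beta_i: decision bias;
% \pi_j: pertinence of category j;  w_x: attentional weight of object x.
```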

  17. Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-09-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.

  18. Perceptual load influences selective attention across development.

    PubMed

    Couperus, Jane W

    2011-09-01

    Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined whether changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event-related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus at the P1 visual component as perceptual load increased. However, although there were no qualitative differences in changes in processing, there were quantitative differences, with shorter P1 latencies in teens and adults compared with children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load as adults to achieve the same performance difference between low- and high-load conditions. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with adults.

  19. The company they keep: Background similarity influences transfer of aftereffects from second- to first-order stimuli

    PubMed Central

    Qian, Ning; Dayan, Peter

    2013-01-01

    A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217

  20. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    PubMed

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.
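
    The covariation analysis described above amounts to adding an intelligence measure as a covariate in a linear model before comparing groups or tasks. A minimal sketch of that logic (hypothetical variable names and toy values, not the study's data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with group membership,
# a perceptual task score, and an intelligence measure (e.g. FSIQ or RPM).
df = pd.DataFrame({
    "group":      ["autistic"] * 4 + ["control"] * 4,
    "task_score": [0.82, 0.91, 0.78, 0.88, 0.74, 0.80, 0.77, 0.83],
    "iq":         [102, 115, 98, 110, 104, 112, 99, 108],
})

# Group comparison without controlling for intelligence.
uncontrolled = smf.ols("task_score ~ group", data=df).fit()

# With IQ as a covariate, the group coefficient reflects differences in
# task performance that are not explained by general intelligence.
controlled = smf.ols("task_score ~ group + iq", data=df).fit()

print(uncontrolled.params)
print(controlled.params)
```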

  1. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    PubMed Central

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background: Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods: We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results: In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions: Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive perceptual abilities differently in autistic and non-autistic individuals. PMID:25117450

  2. Distinct spatial frequency sensitivities for processing faces and emotional expressions.

    PubMed

    Vuilleumier, Patrik; Armony, Jorge L; Driver, Jon; Dolan, Raymond J

    2003-06-01

    High and low spatial frequency information in visual images is processed by distinct neural channels. Using event-related functional magnetic resonance imaging (fMRI) in humans, we show dissociable roles of such visual channels for processing faces and emotional fearful expressions. Neural responses in fusiform cortex, and effects of repeating the same face identity upon fusiform activity, were greater with intact or high-spatial-frequency face stimuli than with low-frequency faces, regardless of emotional expression. In contrast, amygdala responses to fearful expressions were greater for intact or low-frequency faces than for high-frequency faces. An activation of pulvinar and superior colliculus by fearful expressions occurred specifically with low-frequency faces, suggesting that these subcortical pathways may provide coarse fear-related inputs to the amygdala.

  3. Testing the generality of the zoom-lens model: Evidence for visual-pathway specific effects of attended-region size on perception.

    PubMed

    Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark

    2017-05-01

    There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended-region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended-region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.

  4. Perceptual load corresponds with factors known to influence visual search.

    PubMed

    Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P

    2013-10-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    PubMed

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Early Visual Foraging in Relationship to Familial Risk for Autism and Hyperactivity/Inattention.

    PubMed

    Gliga, Teodora; Smith, Tim J; Likely, Noreen; Charman, Tony; Johnson, Mark H

    2015-12-04

    Information foraging is atypical in both autism spectrum disorders (ASDs) and ADHD; however, while ASD is associated with restricted exploration and preference for sameness, ADHD is characterized by hyperactivity and increased novelty seeking. Here, we ask whether similar biases are present in visual foraging in younger siblings of children with a diagnosis of ASD with or without additional high levels of hyperactivity and inattention. Fifty-four low-risk controls (LR) and 50 high-risk siblings (HR) took part in an eye-tracking study at 8 and 14 months and at 3 years of age. At 8 months, siblings of children with ASD and low levels of hyperactivity/inattention (HR/ASD-HI) were more likely to return to previously visited areas in the visual scene than were LR and siblings of children with ASD and high levels of hyperactivity/inattention (HR/ASD+HI). We show that visual foraging is atypical in infants at-risk for ASD. We also reveal a paradoxical effect, in that additional family risk for ADHD core symptoms mitigates the effect of ASD risk on visual information foraging. © The Author(s) 2015.

  7. Low-level and high-level modulations of fixational saccades and high frequency oscillatory brain activity in a visual object classification task

    PubMed Central

    Kosilo, Maciej; Wuerger, Sophie M.; Craddock, Matt; Jennings, Ben J.; Hunt, Amelia R.; Martinovic, Jasna

    2013-01-01

    Until recently, induced gamma-band activity (GBA) was considered a neural marker of cortical object representation. However, induced GBA in the electroencephalogram (EEG) is susceptible to artifacts caused by miniature fixational saccades. Recent studies have demonstrated that fixational saccades also reflect high-level representational processes. Do high-level as opposed to low-level factors influence fixational saccades? What is the effect of these factors on artifact-free GBA? To investigate this, we conducted separate eye tracking and EEG experiments using identical designs. Participants classified line drawings as objects or non-objects. To introduce low-level differences, contours were defined along different directions in cardinal color space: S-cone-isolating, intermediate isoluminant, or a full-color stimulus, the latter containing an additional achromatic component. Prior to the classification task, object discrimination thresholds were measured and stimuli were scaled to matching suprathreshold levels for each participant. In both experiments, behavioral performance was best for full-color stimuli and worst for S-cone isolating stimuli. Saccade rates 200–700 ms after stimulus onset were modulated independently by low- and high-level factors, being higher for full-color stimuli than for S-cone isolating stimuli and higher for objects. Low-amplitude evoked GBA and total GBA were observed in very few conditions, showing that paradigms with isoluminant stimuli may not be ideal for eliciting such responses. We conclude that cortical loops involved in the processing of objects are preferentially excited by stimuli that contain achromatic information. Their activation can lead to relatively early exploratory eye movements even for foveally-presented stimuli. PMID:24391611

  8. Emotion and Perception: The Role of Affective Information

    PubMed Central

    Zadra, Jonathan R.; Clore, Gerald L.

    2011-01-01

    Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can both be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565

  9. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing towards low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  10. The Effects of Similarity on High-Level Visual Working Memory Processing.

    PubMed

    Yang, Li; Mo, Lei

    2017-01-01

    Similarity has been observed to have opposite effects on visual working memory (VWM) for complex images. How can these discrepant results be reconciled? To answer this question, we used a change-detection paradigm to test visual working memory performance for multiple real-world objects. We found that working memory for moderate similarity items was worse than that for either high or low similarity items. This pattern was unaffected by manipulations of stimulus type (faces vs. scenes), encoding duration (limited vs. self-paced), and presentation format (simultaneous vs. sequential). We also found that the similarity effects differed in strength in different categories (scenes vs. faces). These results suggest that complex real-world objects are represented using a centre-surround inhibition organization. These results support the category-specific cortical resource theory and further suggest that centre-surround inhibition organization may differ by category.

  11. Visual selection and maintenance of the cell lines with high plant regeneration ability and low ploidy level in Dianthus acicularis by monitoring with flow cytometry analysis.

    PubMed

    Shiba, Tomonori; Mii, Masahiro

    2005-12-01

    An efficient plant regeneration system from cell suspension cultures was established in D. acicularis (2n=90) by monitoring the ploidy level and visually selecting the cultures. The ploidy level of the cell cultures was closely related to the shoot regeneration ability. The cell lines comprising original ploidy levels (2C+4C cells corresponding to DNA contents of G1 and G2 cells of diploid plant, respectively) showed high regeneration ability, whereas those containing cells with 8C or higher DNA C-values showed low or no regeneration ability. The highly regenerable cell lines thus selected consisted of compact cell clumps with yellowish color and relatively moderate growth, suggesting that it is possible to visually select highly regenerable cell lines with the original ploidy level. All the regenerated plantlets from the highly regenerable cell cultures exhibited normal phenotypes, and no variations in ploidy level were observed by flow cytometry (FCM) analysis.

  12. Projectors, associators, visual imagery, and the time course of visual processing in grapheme-color synesthesia.

    PubMed

    Amsel, Ben D; Kutas, Marta; Coulson, Seana

    2017-10-01

    In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigate the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those in color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent from experimental effects, all orthographic stimuli elicited larger N170 and P2 in synesthetes than controls. While P2 (150-250 ms) enhancement was similar in all synesthetes, N170 (130-210 ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes whose concurrent colors are experienced as originating in external space.

  13. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important, particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system, including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full-reference quality metrics between the encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low-light additive noise model, ISP color processing, and the video encoder. Our experiments show that in low light conditions, and for certain choices of color processing, the system-level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
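
    As an illustration of the full-reference comparison between encoder input and reconstructed frames mentioned above, the sketch below computes a per-frame PSNR and averages it over a sequence. PSNR is used here only as a stand-in; the abstract does not specify which metrics were used.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two frames of equal shape."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(reference_frames, reconstructed_frames):
    """Mean full-reference score over corresponding frames of a sequence."""
    scores = [psnr(r, d) for r, d in zip(reference_frames, reconstructed_frames)]
    return float(np.mean(scores))
```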

  14. Visual-spatial processing and working-memory load as a function of negative and positive psychotic-like experiences.

    PubMed

    Abu-Akel, A; Reniers, R L E P; Wood, S J

    2016-09-01

    Patients with schizophrenia show impairments in working-memory and visual-spatial processing, but little is known about the dynamic interplay between the two. To provide insight into this important question, we examined the effect of positive and negative symptom expressions in healthy adults on perceptual processing while concurrently performing a working-memory task that requires the allocations of various degrees of cognitive resources. The effect of positive and negative symptom expressions in healthy adults (N = 91) on perceptual processing was examined in a dual-task paradigm of visual-spatial working memory (VSWM) under three conditions of cognitive load: a baseline condition (with no concurrent working-memory demand), a low VSWM load condition, and a high VSWM load condition. Participants overall performed more efficiently (i.e., faster) with increasing cognitive load. This facilitation in performance was unrelated to symptom expressions. However, participants with high-negative, low-positive symptom expressions were less accurate in the low VSWM condition compared to the baseline and the high VSWM load conditions. Attenuated, subclinical expressions of psychosis affect cognitive performance that is impaired in schizophrenia. The "resource limitations hypothesis" may explain the performance of the participants with high-negative symptom expressions. The dual-task of visual-spatial processing and working memory may be beneficial to assessing the cognitive phenotype of individuals with high risk for schizophrenia spectrum disorders.

  15. A strategy to optimize CT pediatric dose with a visual discrimination model

    NASA Astrophysics Data System (ADS)

    Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.

    2008-03-01

    Technological developments of computed tomography (CT) have led to a drastic increase of its clinical utilization, creating concerns about patient exposure. To better control the dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low-contrast resolution phantom (QRM, Germany) was scanned using CTDI_vol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDI_vol were afterwards processed to simulate dose reductions by white noise addition. The realism of the simulated noise was verified by comparing the shape and amplitude of normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDI_vol levels, down to 3.0 mGy. Image quality of phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as the reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low-contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for quality assessment of images with highly correlated noise. Patient image assessment showed a threshold of conspicuity loss only for children over 35 kg.
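
    The dose-reduction simulation described above (adding white noise to high-dose acquisitions) can be illustrated with a simplified image-domain sketch. The study itself processed raw projection data; the scaling assumed here, with noise variance inversely proportional to CTDI_vol, is a common quantum-noise approximation rather than the authors' exact procedure.

```python
import numpy as np

def simulate_lower_dose(image, sigma_ref, ctdi_ref, ctdi_target, rng=None):
    """Approximate a lower-dose image by adding white Gaussian noise.

    Assumes quantum-limited noise whose variance scales as 1/dose, so the
    added noise has variance sigma_ref**2 * (ctdi_ref / ctdi_target - 1),
    where sigma_ref is the noise standard deviation of the high-dose image.
    """
    if ctdi_target >= ctdi_ref:
        return image.copy()                      # no reduction requested
    rng = np.random.default_rng() if rng is None else rng
    sigma_add = sigma_ref * np.sqrt(ctdi_ref / ctdi_target - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)
```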

  16. A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space

    PubMed Central

    Zheng, Wei; Zhang, Xiaoya; Lu, Qi

    2015-01-01

    This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time. Experiments were then conducted with an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618
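
    The final visualization step, mapping a deformation-risk level onto the RGB colour space, can be sketched as below; the green-to-red ramp is an assumption chosen for illustration, and the BP/Hopfield classification stages are omitted.

```python
def risk_to_rgb(risk):
    """Map a normalised risk level in [0, 1] to an (R, G, B) triple,
    running from green (safe) through yellow to red (dangerous)."""
    r = max(0.0, min(1.0, float(risk)))          # clamp to [0, 1]
    if r < 0.5:
        return (int(2 * r * 255), 255, 0)        # green -> yellow
    return (255, int((2 - 2 * r) * 255), 0)      # yellow -> red
```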

  17. Interactions between auditory and visual semantic stimulus classes: evidence for common processing networks for speech and body actions.

    PubMed

    Meyer, Georg F; Greenlee, Mark; Wuerger, Sophie

    2011-09-01

    Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.

  18. Effects of Congruent and Incongruent Stimulus Colour on Flavour Discriminations.

    PubMed

    Wieneke, Leonie; Schmuck, Pauline; Zacher, Julia; Greenlee, Mark W; Plank, Tina

    2018-01-01

    In addition to gustatory, olfactory and somatosensory input, visual information plays a role in our experience of food and drink. We asked whether colour in this context has an effect at the perceptual level via multisensory integration or if higher level cognitive factors are involved. With an articulatory suppression task comparable to that of Stevenson and Oaten, cognitive processes should be interrupted during the flavour discrimination task, so that any residual colour effects would be traceable to low-level integration. In a three-alternative forced-choice paradigm (triangle test), subjects judged which of three samples contained a different flavour. On each trial, they tasted three liquids from identical glasses, with one of them containing a different flavour. The substances were congruent in colour and flavour, incongruent or uncoloured. Subjects who performed the articulatory suppression task responded faster and made fewer errors. The findings suggest a role for higher level cognitive processing in the effect of colour on flavour judgements.

  19. Effects of Congruent and Incongruent Stimulus Colour on Flavour Discriminations

    PubMed Central

    Wieneke, Leonie; Schmuck, Pauline; Zacher, Julia; Plank, Tina

    2018-01-01

    In addition to gustatory, olfactory and somatosensory input, visual information plays a role in our experience of food and drink. We asked whether colour in this context has an effect at the perceptual level via multisensory integration or if higher level cognitive factors are involved. With an articulatory suppression task comparable to that of Stevenson and Oaten, cognitive processes should be interrupted during the flavour discrimination task, so that any residual colour effects would be traceable to low-level integration. In a three-alternative forced-choice paradigm (triangle test), subjects judged which of three samples contained a different flavour. On each trial, they tasted three liquids from identical glasses, with one of them containing a different flavour. The substances were congruent in colour and flavour, incongruent or uncoloured. Subjects who performed the articulatory suppression task responded faster and made fewer errors. The findings suggest a role for higher level cognitive processing in the effect of colour on flavour judgements. PMID:29755721

  20. Visual search for object categories is predicted by the representational architecture of high-level visual cortex

    PubMed Central

    Alvarez, George A.; Nakayama, Ken; Konkle, Talia

    2016-01-01

    Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
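
    The brain/behavior analysis described above follows the standard representational similarity logic. A minimal sketch (not the authors' pipeline): build a neural representational dissimilarity matrix from category-wise response patterns, then rank-correlate its pairwise entries with the mean search times for the corresponding category pairs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def brain_behavior_correlation(neural_patterns, pairwise_search_times):
    """Correlate a neural RDM with pairwise visual-search performance.

    neural_patterns       : (n_categories, n_voxels) response pattern per category.
    pairwise_search_times : condensed vector of mean search RTs, one per
                            category pair, ordered like scipy's pdist output.
    """
    neural_rdm = pdist(neural_patterns, metric="correlation")   # 1 - Pearson r
    rho, p = spearmanr(neural_rdm, pairwise_search_times)
    return rho, p
```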

  1. Language-Mediated Visual Orienting Behavior in Low and High Literates

    PubMed Central

    Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar

    2011-01-01

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083

  2. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    NASA Technical Reports Server (NTRS)

    Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.

  3. A multi-pathway hypothesis for human visual fear signaling

    PubMed Central

    Silverstein, David N.; Ingvar, Martin

    2015-01-01

    A hypothesis is proposed for five visual fear signaling pathways in humans, based on an analysis of anatomical connectivity from primate studies and human functional connectivity and tractography from brain imaging studies. Earlier work has identified possible subcortical and cortical fear pathways known as the “low road” and “high road,” which arrive at the amygdala independently. In addition to a subcortical pathway, we propose four cortical signaling pathways in humans along the visual ventral stream. All four of these traverse through the LGN to the visual cortex (VC) and branch off at the inferior temporal area, with one projection directly to the amygdala; another traversing the orbitofrontal cortex; and two others passing through the parietal and then prefrontal cortex, one excitatory pathway via the ventral-medial area and one regulatory pathway via the ventral-lateral area. These pathways have progressively longer propagation latencies and may have evolved progressively with brain development to take advantage of higher-level processing. Using the anatomical path lengths and latency estimates for each of these five pathways, predictions are made for the relative processing times at selective ROIs and arrival at the amygdala, based on the presentation of a fear-relevant visual stimulus. Partial verification of the temporal dynamics of this hypothesis might be accomplished using experimental MEG analysis. Possible experimental protocols are suggested. PMID:26379513

  4. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  5. Piaget's Water-Level Task: The Impact of Vision on Performance

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Koustriava, Eleni

    2011-01-01

    In the present study, the aim was to examine the differences in performance between children and adolescents with visual impairment and sighted peers in the water-level task. Twenty-eight individuals with visual impairments, 14 individuals with blindness and 14 individuals with low vision, and 28 sighted individuals participated in the present…

  6. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues, which draw attention to solution-relevant information and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  7. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues, which draw attention to solution-relevant information and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  8. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of this spatial non-uniformity, different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and that new criteria are needed.
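
    One simple way to make such criteria ROI-aware, sketched below under the assumption that a binary ROI mask is available, is to pool a local structural-similarity map separately over ROI and background pixels; this is an illustration of the idea, not the criterion the authors call for.

```python
import numpy as np
from skimage.metrics import structural_similarity

def roi_quality(reference, test, roi_mask, data_range=255):
    """SSIM pooled separately over ROI and background pixels.

    Returns (ssim_roi, ssim_background) computed from the local SSIM map,
    so the score can be aggregated over an arbitrary binary mask.
    """
    _, ssim_map = structural_similarity(
        reference, test, data_range=data_range, full=True)
    roi = np.asarray(roi_mask, dtype=bool)
    return float(ssim_map[roi].mean()), float(ssim_map[~roi].mean())
```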

  9. Persistent recruitment of somatosensory cortex during active maintenance of hand images in working memory.

    PubMed

    Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B

    2018-07-01

    Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  11. Cross-modal perceptual load: the impact of modality and individual differences.

    PubMed

    Sandhu, Rajwant; Dyson, Benjamin James

    2016-05-01

    Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend-auditory and attend-visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task, but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.

  12. Emotion improves and impairs early vision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-06-01

    Recent studies indicate that emotion enhances early vision, but the generality of this finding remains unknown. Do the benefits of emotion extend to all basic aspects of vision, or are they limited in scope? Our results show that the brief presentation of a fearful face, compared with a neutral face, enhances sensitivity for the orientation of subsequently presented low-spatial-frequency stimuli, but diminishes orientation sensitivity for high-spatial-frequency stimuli. This is the first demonstration that emotion not only improves but also impairs low-level vision. The selective low-spatial-frequency benefits are consistent with the idea that emotion enhances magnocellular processing. Additionally, we suggest that the high-spatial-frequency deficits are due to inhibitory interactions between magnocellular and parvocellular pathways. Our results suggest an emotion-induced trade-off in visual processing, rather than a general improvement. This trade-off may benefit perceptual dimensions that are relevant for survival at the expense of those that are less relevant.

  13. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    PubMed

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  14. Do Pattern-Focused Visuals Improve Skin Self-Examination Performance? Explicating the Visual Skill Acquisition Model

    PubMed Central

    JOHN, KEVIN K.; JENSEN, JAKOB D.; KING, ANDY J.; RATCLIFF, CHELSEA L.; GROSSMAN, DOUGLAS

    2017-01-01

    Skin self-examination (SSE) consists of routinely checking the body for atypical moles that might be cancerous. Identifying atypical moles is a visual task; thus, SSE training materials utilize pattern-focused visuals to cultivate this skill. Despite widespread use, researchers have yet to explicate how pattern-focused visuals cultivate visual skill. Using eye tracking to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (pattern: ABCDE vs. ugly duckling sign [UDS]) × 2 (presentation: photorealistic images vs. illustrations) factorial design to assess whether and how pattern-focused visuals can increase layperson accuracy in identifying atypical moles. Overall, illustrations resulted in greater sensitivity, while photos resulted in greater specificity. The UDS × photorealistic condition showed greatest specificity. For those in the photo condition with high self-efficacy, UDS increased specificity directly. For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect such that these individuals spent a larger amount of their viewing time observing the atypical moles, and time on target was positively related to specificity. Illustrations provided significant gains in specificity for those with low-to-moderate self-efficacy by increasing total fixation time on the atypical moles. Findings suggest that maximizing visual processing efficiency could enhance existing SSE training techniques. PMID:28759333
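    Because the outcomes above are reported as sensitivity and specificity, a small worked example of how those two rates are derived from identification decisions may be useful; the counts below are invented for illustration and are not data from this study.

        # Hypothetical counts for one viewer judging 12 moles (4 atypical, 8 typical).
        hits, misses = 3, 1                      # atypical moles correctly flagged vs. missed
        correct_rejections, false_alarms = 6, 2  # typical moles correctly passed vs. wrongly flagged

        sensitivity = hits / (hits + misses)                                     # proportion of atypical moles caught
        specificity = correct_rejections / (correct_rejections + false_alarms)   # proportion of typical moles correctly passed
        print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")  # 0.75, 0.75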

  15. Direct Observation of Individual Charges and Their Dynamics on Graphene by Low-Energy Electron Holography.

    PubMed

    Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2016-09-14

    Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allows for direct visual inspection of charge transfer processes.

  16. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex

    PubMed Central

    Cant, Jonathan S.; Xu, Yaoda

    2015-01-01

    Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. PMID:24964917

  17. Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition

    PubMed Central

    Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M.; Potenza, Marc N.

    2011-01-01

    Background The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. Methodology/Principal Findings We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Conclusions Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load. PMID:21267080

  18. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  19. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  20. Training-induced recovery of low-level vision followed by mid-level perceptual improvements in developmental object and face agnosia

    PubMed Central

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5–6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. PMID:24698161

  1. Training-induced recovery of low-level vision followed by mid-level perceptual improvements in developmental object and face agnosia.

    PubMed

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5-6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.

  2. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
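    A sketch of the kind of fit described above, in which accuracy rises from chance toward an asymptote with an exponential time constant as presentation time increases, is given below. The saturating-exponential parameterization, the onset-delay term, and the data points are assumptions chosen for illustration and do not reproduce the authors' exact model.

        import numpy as np
        from scipy.optimize import curve_fit

        def saturating_exponential(t, a_max, tau, t0):
            # Accuracy rises from chance (0.5 in a two-alternative task) toward a_max
            # with time constant tau, after an onset delay t0 (all times in ms).
            return 0.5 + (a_max - 0.5) * (1.0 - np.exp(-np.clip(t - t0, 0.0, None) / tau))

        t = np.array([20, 40, 60, 80, 100, 140, 200], dtype=float)   # presentation times (ms)
        acc = np.array([0.52, 0.61, 0.72, 0.80, 0.86, 0.91, 0.93])   # hypothetical proportion correct

        params, _ = curve_fit(saturating_exponential, t, acc, p0=[0.95, 50.0, 10.0])
        a_max, tau, t0 = params
        print(f"asymptote = {a_max:.2f}, time constant = {tau:.0f} ms, onset delay = {t0:.0f} ms")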

  3. Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?

    PubMed

    Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni

    2015-09-01

    The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. Twenty adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.

  4. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    NASA Astrophysics Data System (ADS)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six CsI(Tl) scintillator cubes, each 3.5 cm on a side and read out by super-bialkali photo-multiplier tubes (PMTs). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can easily be used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.

  5. The Relation of Visual and Auditory Aptitudes to First Grade Low Readers' Achievement under Sight-Word and Systematic Phonic Instructions. Research Report #36.

    ERIC Educational Resources Information Center

    Gallistel, Elizabeth; And Others

    Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…

  6. Conceptual analysis of Physiology of vision in Ayurveda.

    PubMed

    Balakrishnan, Praveen; Ashwini, M J

    2014-07-01

    The process by which the outside world is seen is termed the visual process, or the physiology of vision. There are three phases in this visual process: refraction of light, conversion of light energy into an electrical impulse, and finally peripheral and central neurophysiology. With the advent of modern instruments, the step-by-step biochemical changes occurring at each level of the visual process have been deciphered. Many investigations have emerged to track these changes and to help diagnose the exact nature of disease. Ayurveda describes this physiology of vision in terms of the functions of vata and pitta. The philosophical textbook of Ayurveda, Tarka Sangraha, gives certain basic facts about the visual process. This article discusses the second and third phases of the visual process. The visual process is analyzed critically, step by step, through the lens of Ayurveda combined with the basics of philosophy from Tarka Sangraha, to generate a concrete idea of the physiology and thereby interpret the pathology on Ayurvedic grounds in light of the investigative reports.

  7. Interactions between attention, context and learning in primary visual cortex.

    PubMed

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

    Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  8. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.
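    The "unique contribution after controlling for covariates" logic above is commonly quantified as the increment in explained variance (delta R^2) when the predictor of interest is added to a covariate-only regression. The sketch below illustrates that computation on simulated data; the variable names and the data-generating assumptions are illustrative and do not reproduce the study's analysis.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 223                                   # sample size borrowed from the abstract; the data are simulated
        covariates = rng.standard_normal((n, 3))  # stand-ins for general cognitive measures (standardized)
        form_perception = 0.4 * covariates[:, 0] + rng.standard_normal(n)  # correlated with the covariates
        exact_computation = (covariates @ np.array([0.3, 0.2, 0.1])
                             + 0.5 * form_perception + rng.standard_normal(n))

        # Step 1: covariates only; Step 2: covariates plus visual form perception
        r2_step1 = LinearRegression().fit(covariates, exact_computation).score(covariates, exact_computation)
        X_step2 = np.column_stack([covariates, form_perception])
        r2_step2 = LinearRegression().fit(X_step2, exact_computation).score(X_step2, exact_computation)
        print(f"delta R^2 for visual form perception: {r2_step2 - r2_step1:.3f}")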

  9. Visual Acuity does not Moderate Effect Sizes of Higher-Level Cognitive Tasks

    PubMed Central

    Houston, James R.; Bennett, Ilana J.; Allen, Philip A.; Madden, David J.

    2016-01-01

    Background Declining visual capacities in older adults have been posited as a driving force behind adult age differences in higher-order cognitive functions (e.g., the "common cause" hypothesis of Lindenberger & Baltes, 1994). McGowan, Patterson and Jordan (2013) also found that a surprisingly large number of published cognitive aging studies failed to include adequate measures of visual acuity. However, a recent meta-analysis of three studies (LaFleur & Salthouse, 2014) failed to find evidence that visual acuity moderated or mediated age differences in higher-level cognitive processes. In order to provide a more extensive test of whether visual acuity moderates age differences in higher-level cognitive processes, we conducted a broader meta-analysis of the topic. Methods Using results from 456 studies, we calculated effect sizes for the main effect of age across four cognitive domains (attention, executive function, memory, and perception/language) separately for five levels of visual acuity criteria (no criteria, undisclosed criteria, self-reported acuity, 20/80-20/31, and 20/30 or better). Results As expected, age had a significant effect on each cognitive domain. However, these age effects did not further differ as a function of visual acuity criteria. Conclusion The current meta-analytic, cross-sectional results suggest that visual acuity is not significantly related to age group differences in higher-level cognitive performance, thereby replicating LaFleur and Salthouse (2014). Further efforts are needed to determine whether other measures of visual functioning (e.g., contrast sensitivity, luminance) affect age differences in cognitive functioning. PMID:27070044

  10. Cognitive predictors of treatment response to bupropion and cognitive effects of bupropion in patients with major depressive disorder.

    PubMed

    Herrera-Guzmán, Ixchel; Gudayol-Ferré, Esteve; Lira-Mandujano, Jennifer; Herrera-Abarca, Jorge; Herrera-Guzmán, Daniel; Montoya-Pérez, Karina; Guardia-Olmos, Joan

    2008-07-15

    Cognitive effects of antidepressants and cognitive predictors of antidepressant treatment response are recent focuses of interest in the neuropsychology of depression. We studied the cognitive predictors of treatment response to bupropion and its neuropsychological effects in patients with major depressive disorder. Twenty subjects meeting the DSM-IV criteria for major depressive disorder were assessed with the Hamilton Depression Rating Scale and a neuropsychological battery. Subjects were medicated with 150 mg/day of bupropion sustained release for 8 weeks. At the end of the trial, 12 subjects were classified as responders to treatment and 8 were non-responders. Our findings suggest that low pretreatment measures of visual memory and low levels of mental processing speed are predictive of good response to bupropion. The cognitive effects of bupropion after the treatment showed that patients improved in visual memory measures and in mental processing speed. Our results suggest that cognitive predictors of treatment response to bupropion and cognitive effects of bupropion in patients with major depressive disorder could be closely related. These findings need to be replicated due to the exploratory nature of the present work.

  11. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  12. Vigilance and iconic memory in children at high risk for alcoholism.

    PubMed

    Steinhauer, S R; Locke, J; Hill, S Y

    1997-07-01

    Previous studies report reduced visual event-related potential (ERP) amplitudes in young males at high risk for alcoholism. These findings could involve difficulties at several stages of visual processing. This study was aimed at examining vigilance performance and iconic memory functions in children at high risk or low risk for alcoholism. Sustained vigilance and retrieval from iconic memory were evaluated in 54 (29 male) white children at high risk and 47 (25 male) white children at low risk for developing alcoholism. Children were also grouped according to gender and age (younger: 8-12 years; older: 13-18 years). No differences in visual sensitivity, response criterion, or reaction time were associated with risk status on the degraded visual stimulus version of the Continuous Performance Test. For the Span of Apprehension, no differences were found due to risk status when only 1 or 5 distractors were presented, although with 9 distractors a significant effect of risk status was found when it was tested as an interaction with gender and age (decreased accuracy for older high-risk boys compared to older low-risk boys). These findings suggest that ERP deviations are not attributable to deficits at these stages of visual processing, but instead reflect difficulty with more complex utilization of information. Implications of these results are that the differences between high- and low-risk children that have been reported previously for visual ERP components (e.g., P300) are not attributable to deficits of attentional or iconic memory mechanisms.

  13. Neural correlates of emotional intelligence in a visual emotional oddball task: an ERP study.

    PubMed

    Raz, Sivan; Dan, Orrie; Zysberg, Leehu

    2014-11-01

    The present study was aimed at identifying potential behavioral and neural correlates of Emotional Intelligence (EI) by using scalp-recorded Event-Related Potentials (ERPs). EI levels were defined according to both self-report questionnaire and a performance-based ability test. We identified ERP correlates of emotional processing by using a visual-emotional oddball paradigm, in which subjects were confronted with one frequent standard stimulus (a neutral face) and two deviant stimuli (a happy and an angry face). The effects of these faces were then compared across groups with low and high EI levels. The ERP results indicate that participants with high EI exhibited significantly greater mean amplitudes of the P1, P2, N2, and P3 ERP components in response to emotional and neutral faces, at frontal, posterior-parietal and occipital scalp locations. P1, P2 and N2 are considered indexes of attention-related processes and have been associated with early attention to emotional stimuli. The later P3 component has been thought to reflect more elaborative, top-down, emotional information processing including emotional evaluation and memory encoding and formation. These results may suggest greater recruitment of resources to process all emotional and non-emotional faces at early and late processing stages among individuals with higher EI. The present study underscores the usefulness of ERP methodology as a sensitive measure for the study of emotional stimuli processing in the research field of EI. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Oculomotor guidance and capture by irrelevant faces.

    PubMed

    Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan

    2012-01-01

    Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.

  15. The Social Lives of Canadian Youths with Visual Impairments

    ERIC Educational Resources Information Center

    Gold, Deborah; Shaw, Alexander; Wolffe, Karen

    2010-01-01

    This survey of the social and leisure experiences of Canadian youths with visual impairments found that, in general, youths with low vision experienced more social challenges than did their peers who were blind. Levels of social support were not found to differ on the basis of level of vision, sex, or age. (Contains 1 figure and 1 table.)

  16. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A conventional WVSN consists of visual nodes that capture video and transmit it to the base station without processing, but limited network bandwidth restricts real-time video streaming from remote visual nodes over the wireless link. Here, three levels of DWT filtering are implemented to process the image captured by the camera. Once all the wavelet coefficients have been produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station, reducing the power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full image detail, closely matching the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA, and a wireless ZigBee® network based on the Ember EM250 chip.
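    To make the transmission saving concrete, the sketch below implements a lifting-scheme Haar DWT in Python and keeps only the low-frequency (LL) band after three decomposition levels, which is the kind of approximate image a node could send first. The Haar filter and the NumPy implementation are assumptions chosen for brevity; the node described in the paper is an FPGA design whose exact wavelet and word lengths are not reproduced here.

        import numpy as np

        def haar_lifting_1d(x):
            # One lifting step along the last axis: predict (detail) then update (approximation).
            even, odd = x[..., 0::2].astype(float), x[..., 1::2].astype(float)
            detail = odd - even            # predict: detail = odd minus its prediction from even
            approx = even + detail / 2.0   # update: preserves the local mean
            return approx, detail

        def dwt2_level(img):
            # One 2D decomposition level: rows first, then columns, returning (LL, (LH, HL, HH)).
            low, high = haar_lifting_1d(img)        # along rows (last axis)
            ll, lh = haar_lifting_1d(low.T)         # along columns of the low band
            hl, hh = haar_lifting_1d(high.T)
            return ll.T, (lh.T, hl.T, hh.T)

        def lowpass_pyramid(img, levels=3):
            # Apply `levels` decompositions, keeping only the final LL band for transmission.
            ll = img
            for _ in range(levels):
                ll, _details = dwt2_level(ll)
            return ll

        frame = np.random.default_rng(0).integers(0, 256, size=(64, 64))
        ll3 = lowpass_pyramid(frame, levels=3)
        print(frame.size, "pixels ->", ll3.size, "coefficients")  # 4096 -> 64

    Sending the 8 x 8 LL band instead of the full 64 x 64 frame cuts the coefficient count by a factor of 64; because the lifting steps are invertible, later transmitting the detail bands restores the full-resolution image.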

  17. VEP contrast sensitivity responses reveal reduced functional segregation of mid and high filters of visual channels in autism.

    PubMed

    Jemel, Boutheina; Mimeault, Daniel; Saint-Amour, Dave; Hosein, Anthony; Mottron, Laurent

    2010-06-01

    Despite the vast amount of behavioral data showing a pronounced tendency in individuals with autism spectrum disorder (ASD) to process fine visual details, much less is known about the neurophysiological characteristics of spatial vision in ASD. Here, we address this issue by assessing the contrast sensitivity response properties of the early visual-evoked potentials (VEPs) to sine-wave gratings of low, medium and high spatial frequencies in adults with ASD and in an age- and IQ-matched control group. Our results show that while VEP contrast responses to low and high spatial frequency gratings did not differ between ASD and controls, early VEPs to mid spatial frequency gratings exhibited similar response characteristics as those to high spatial frequency gratings in ASD. Our findings show evidence for an altered functional segregation of early visual channels, especially those responsible for processing mid- and high-frequency spatial scales.

  18. Proline and COMT Status Affect Visual Connectivity in Children with 22q11.2 Deletion Syndrome

    PubMed Central

    Magnée, Maurice J. C. M.; Lamme, Victor A. F.; de Sain-van der Velden, Monique G. M.; Vorstman, Jacob A. S.; Kemner, Chantal

    2011-01-01

    Background Individuals with the 22q11.2 deletion syndrome (22q11DS) are at increased risk for schizophrenia and Autism Spectrum Disorders (ASDs). Given the prevalence of visual processing deficits in these three disorders, a causal relationship between genes in the deleted region of chromosome 22 and visual processing is likely. Therefore, 22q11DS may represent a unique model to understand the neurobiology of visual processing deficits related with ASD and psychosis. Methodology We measured Event-Related Potentials (ERPs) during a texture segregation task in 58 children with 22q11DS and 100 age-matched controls. The C1 component was used to index afferent activity of visual cortex area V1; the texture negativity wave provided a measure for the integrity of recurrent connections in the visual cortical system. COMT genotype and plasma proline levels were assessed in 22q11DS individuals. Principal Findings Children with 22q11DS showed enhanced feedforward activity starting from 70 ms after visual presentation. ERP activity related to visual feedback activity was reduced in the 22q11DS group, which was seen as less texture negativity around 150 ms post presentation. Within the 22q11DS group we further demonstrated an association between high plasma proline levels and aberrant feedback/feedforward ratios, which was moderated by the COMT 158 genotype. Conclusions These findings confirm the presence of early visual processing deficits in 22q11DS. We discuss these in terms of dysfunctional synaptic plasticity in early visual processing areas, possibly associated with deviant dopaminergic and glutamatergic transmission. As such, our findings may serve as a promising biomarker related to the development of schizophrenia among 22q11DS individuals. PMID:21998713

  19. Music in Research and Rehabilitation of Disorders of Consciousness: Psychological and Neurophysiological Foundations.

    PubMed

    Kotchoubey, Boris; Pavlov, Yuri G; Kleber, Boris

    2015-01-01

    According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent maladies of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music induced emotions and rhythmic stimulation can affect the dopaminergic reward-system and activity in the motor system respectively, thus serving as a starting point for rehabilitation.

  20. Music in Research and Rehabilitation of Disorders of Consciousness: Psychological and Neurophysiological Foundations

    PubMed Central

    Kotchoubey, Boris; Pavlov, Yuri G.; Kleber, Boris

    2015-01-01

    According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent maladies of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper overviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients’ self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music induced emotions and rhythmic stimulation can affect the dopaminergic reward-system and activity in the motor system respectively, thus serving as a starting point for rehabilitation. PMID:26640445

  1. The effort to close the gap: Tracking the development of illusory contour processing from childhood to adulthood with high-density electrical mapping

    PubMed Central

    Altschuler, Ted S.; Molholm, Sophie; Butler, John S.; Mercier, Manuel R.; Brandwein, Alice B.; Foxe, John J.

    2014-01-01

    The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230-400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. PMID:24365674

  2. Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome

    ERIC Educational Resources Information Center

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent

    2010-01-01

    Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…

  3. Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.

    ERIC Educational Resources Information Center

    Boden, Catherine; Brodeur, Darlene A.

    1999-01-01

    A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…

  4. Parallel processing of general and specific threat during early stages of perception

    PubMed Central

    2016-01-01

    Differential processing of threat can consummate as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811

  5. How Dynamic Visualization Technology can Support Molecular Reasoning

    NASA Astrophysics Data System (ADS)

    Levy, Dalit

    2013-10-01

    This paper reports the results of a study aimed at exploring the advantages of dynamic visualization for the development of better understanding of molecular processes. We designed a technology-enhanced curriculum module in which high school chemistry students conduct virtual experiments with dynamic molecular visualizations of solid, liquid, and gas. They interact with the visualizations and carry out inquiry activities to make and refine connections between observable phenomena and atomic level processes related to phase change. The explanations proposed by 300 pairs of students in response to pre/post-assessment items have been analyzed using a scale for measuring the level of molecular reasoning. Results indicate that from pretest to posttest, students make progress in their level of molecular reasoning and are better able to connect intermolecular forces and phase change in their explanations. The paper presents the results through the lens of improvement patterns and the metaphor of the "ladder of molecular reasoning," and discusses how this adds to our understanding of the benefits of interacting with dynamic molecular visualizations.

  6. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory.

    PubMed

    Murty, Vishnu P; Tompary, Alexa; Adcock, R Alison; Davachi, Lila

    2017-01-18

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower-reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. Reward motivation is thought to promote memory by supporting memory consolidation. Yet little is known about how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. Copyright © 2017 the authors 0270-6474/17/370537-09$15.00/0.

  7. Image Statistics and the Representation of Material Properties in the Visual Cortex

    PubMed Central

    Baumgartner, Elisabeth; Gegenfurtner, Karl R.

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714

  8. Image Statistics and the Representation of Material Properties in the Visual Cortex.

    PubMed

    Baumgartner, Elisabeth; Gegenfurtner, Karl R

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images.
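
    As a rough illustration of this kind of analysis (not the authors' pipeline; the images, statistics, and classifier settings below are invented for the example), the following Python sketch computes a few simple spatial-frequency and texture statistics per image and trains a linear classifier to separate images notionally rated high versus low on "texturedness."

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        def image_statistics(img):
            """A few simple spatial-frequency and texture statistics for one grayscale image."""
            amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            h, w = img.shape
            yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
            radius = np.hypot(yy, xx)
            low = amp[radius < min(h, w) / 8].mean()    # mean amplitude at low spatial frequencies
            high = amp[radius >= min(h, w) / 8].mean()  # mean amplitude at high spatial frequencies
            grad = np.abs(np.diff(img, axis=1)).mean()  # crude local-texture measure
            return np.array([low, high, img.std(), grad])

        # Synthetic stand-in images: smooth ("low texturedness") vs. noisy ("high texturedness").
        rng = np.random.default_rng(0)
        images, labels = [], []
        for i in range(80):
            noise = rng.normal(size=(64, 64))
            if i % 2:
                images.append(noise)                                     # textured class
                labels.append(1)
            else:
                images.append(np.cumsum(np.cumsum(noise, 0), 1) / 64.0)  # smooth class
                labels.append(0)

        X = np.array([image_statistics(im) for im in images])
        y = np.array(labels)

        clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    The same kind of linear decoder can then be applied to voxel patterns instead of image statistics, which is the comparison the study draws.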

  9. Opposing dorsal/ventral stream dynamics during figure-ground segregation.

    PubMed

    Wokke, Martijn E; Scholte, H Steven; Lamme, Victor A F

    2014-02-01

    The visual system has been commonly subdivided into two segregated visual processing streams: The dorsal pathway processes mainly spatial information, and the ventral pathway specializes in object perception. Recent findings, however, indicate that different forms of interaction (cross-talk) exist between the dorsal and the ventral stream. Here, we used TMS and concurrent EEG recordings to explore these interactions between the dorsal and ventral stream during figure-ground segregation. In two separate experiments, we used repetitive TMS and single-pulse TMS to disrupt processing in the dorsal (V5/HMT⁺) and the ventral (lateral occipital area) stream during a motion-defined figure discrimination task. We presented stimuli that made it possible to differentiate relatively low-level (figure boundary detection) from higher-level (surface segregation) processing steps during figure-ground segregation. Results show that disruption of V5/HMT⁺ impaired performance related to surface segregation; this effect was mainly found when V5/HMT⁺ was perturbed in an early time window (100 msec) after stimulus presentation. Surprisingly, disruption of the lateral occipital area resulted in increased performance scores and enhanced neural correlates of surface segregation. This facilitatory effect was also mainly found in an early time window (100 msec) after stimulus presentation. These results suggest a "push-pull" interaction in which dorsal and ventral extrastriate areas are being recruited or inhibited depending on stimulus category and task demands.

  10. The Effects of Dietary Carotenoid Supplementation and Retinal Carotenoid Accumulation on Vision-Mediated Foraging in the House Finch

    PubMed Central

    Toomey, Matthew B.; McGraw, Kevin J.

    2011-01-01

    Background For many bird species, vision is the primary sensory modality used to locate and assess food items. The health and spectral sensitivities of the avian visual system are influenced by diet-derived carotenoid pigments that accumulate in the retina. Among wild House Finches (Carpodacus mexicanus), we have found that retinal carotenoid accumulation varies significantly among individuals and is related to dietary carotenoid intake. If diet-induced changes in retinal carotenoid accumulation alter spectral sensitivity, then they have the potential to affect visually mediated foraging performance. Methodology/Principal Findings In two experiments, we measured foraging performance of house finches with dietarily manipulated retinal carotenoid levels. We tested each bird's ability to extract visually contrasting food items from a matrix of inedible distracters under high-contrast (full) and dimmer low-contrast (red-filtered) lighting conditions. In experiment one, zeaxanthin-supplemented birds had significantly increased retinal carotenoid levels, but declined in foraging performance in the high-contrast condition relative to astaxanthin-supplemented birds that showed no change in retinal carotenoid accumulation. In experiments one and two combined, we found that retinal carotenoid concentrations predicted relative foraging performance in the low- vs. high-contrast light conditions in a curvilinear pattern. Performance was positively correlated with retinal carotenoid accumulation among birds with low to medium levels of accumulation (∼0.5–1.5 µg/retina), but declined among birds with very high levels (>2.0 µg/retina). Conclusion/Significance Our results suggest that carotenoid-mediated spectral filtering enhances color discrimination, but that this improvement is traded off against a reduction in sensitivity that can compromise visual discrimination. Thus, retinal carotenoid levels may be optimized to meet the visual demands of specific behavioral tasks and light environments. PMID:21747917

  11. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    PubMed

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  12. How the visual aspects can be crucial in reading acquisition? The intriguing case of crowding and developmental dyslexia.

    PubMed

    Gori, Simone; Facoetti, Andrea

    2015-01-14

    Developmental dyslexia (DD) is the most common neurodevelopmental disorder (about 10% of children across cultures) characterized by severe difficulties in learning to read. According to the dominant view, DD is considered a phonological processing impairment that might be linked to a cross-modal, letter-to-speech sound integration deficit. However, new theories-supported by consistent data-suggest that mild deficits in low-level visual and auditory processing can lead to DD. This evidence supports the probabilistic and multifactorial approach for DD. Among others, an interesting visual deficit that is often associated with DD is excessive visual crowding. Crowding is defined as difficulty in the ability to recognize objects when surrounded by similar items. Crowding, typically observed in peripheral vision, could be modulated by attentional processes. The direct consequence of stronger crowding on reading is the inability to recognize letters when they are surrounded by other letters. This problem directly translates to reading at a slower speed and being more prone to making errors while reading. Our aim is to review the literature supporting the important role of crowding in DD. Moreover, we are interested in proposing new possible studies in order to clarify whether the observed excessive crowding could be a cause rather than an effect of DD. Finally, we also suggest possible remediation and even prevention programs that could be based on reducing the crowding in children with or at risk for DD without involving any phonological or orthographic training. © 2015 ARVO.

  13. Eye movements reveal the time-course of anticipating behaviour based on complex, conflicting desires.

    PubMed

    Ferguson, Heather J; Breheny, Richard

    2011-05-01

    The time-course of representing others' perspectives is inconclusive across the currently available models of ToM processing. We report two visual-world studies investigating how knowledge about a character's basic preferences (e.g. Tom's favourite colour is pink) and higher-order desires (his wish to keep this preference secret) compete to influence online expectations about subsequent behaviour. Participants' eye movements around a visual scene were tracked while they listened to auditory narratives. While clear differences in anticipatory visual biases emerged between conditions in Experiment 1, post-hoc analyses testing the strength of the relevant biases suggested a discrepancy in the time-course of predicting appropriate referents within the different contexts. Specifically, predictions to the target emerged very early when there was no conflict between the character's basic preferences and higher-order desires, but appeared to be relatively delayed when comprehenders were provided with conflicting information about that character's desire to keep a secret. However, a second experiment demonstrated that this apparent 'cognitive cost' in inferring behaviour based on higher-order desires was in fact driven by low-level features between the context sentence and visual scene. Taken together, these results suggest that healthy adults are able to make complex higher-order ToM inferences without the need to call on costly cognitive processes. Results are discussed relative to previous accounts of ToM and language processing. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.

    PubMed

    Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L

    2013-12-01

    Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
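
    The neuron model and plasticity rule named in this abstract are standard components; as a bare-bones sketch (a single neuron and a single synapse with textbook parameter values, not anything taken from the authors' 71,026-neuron GPU model), the Python below steps an Izhikevich regular-spiking neuron with Euler integration and applies a pair-based STDP-like weight update.

        import numpy as np

        # Izhikevich regular-spiking parameters (Izhikevich, 2003)
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        v, u = c, b * c
        dt, T_ms = 1.0, 300.0                     # 1 ms Euler steps, 300 ms of simulation

        # One plastic synapse governed by a pair-based STDP rule
        w = 0.5
        A_plus, A_minus, tau = 0.01, 0.012, 20.0  # potentiation/depression amplitudes, time constant (ms)
        pre_spike_times = {20.0, 70.0, 120.0, 170.0, 220.0}  # hypothetical presynaptic spike times (ms)
        last_pre, last_post = -1e9, -1e9

        t = 0.0
        while t < T_ms:
            I = 10.0                               # tonic driving current (arbitrary units)
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)

            if t in pre_spike_times:               # presynaptic spike arrives
                last_pre = t
                w -= A_minus * np.exp(-(t - last_post) / tau)   # post-before-pre pairing: depression
            if v >= 30.0:                          # postsynaptic spike (Izhikevich reset)
                v, u = c, u + d
                last_post = t
                w += A_plus * np.exp(-(t - last_pre) / tau)      # pre-before-post pairing: potentiation
                print(f"post spike at {t:5.0f} ms, synaptic weight = {w:.3f}")
            w = min(max(w, 0.0), 1.0)              # keep the weight bounded
            t += dt

    Scaling this up to conductance-based synapses, many neurons, and an accumulator read-out is what the full model does; this fragment only shows the two update rules in isolation.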

  15. Composite Measures of Individual and Area-Level Socio-Economic Status Are Associated with Visual Impairment in Singapore

    PubMed Central

    Wah, Win; Earnest, Arul; Sabanayagam, Charumathi; Cheng, Ching-Yu; Ong, Marcus Eng Hock; Wong, Tien Y.; Lamoureux, Ecosse L.

    2015-01-01

    Purpose To investigate the independent relationship of individual- and area-level socio-economic status (SES) with the presence and severity of visual impairment (VI) in an Asian population. Methods Cross-sectional data were drawn from 9993 Chinese, Malay, and Indian adults aged 40–80 years who participated in the Singapore Epidemiology of Eye Diseases Study (2004–2011) in Singapore. Based on the presenting visual acuity (PVA) in the better-seeing eye, VI was categorized into normal vision (logMAR ≤ 0.30), low vision (logMAR > 0.30 and < 1.00), and blindness (logMAR ≥ 1.00). Any VI was defined as low vision/blindness in the PVA of the better-seeing eye. Individual-level low SES was defined as a composite of primary-level education, monthly income < 2000 SGD, and residing in a 1- or 2-room public apartment. Area-level SES was assessed using a socio-economic disadvantage index (SEDI), created using 12 variables from the 2010 Singapore census. A high SEDI score indicates a relatively poor SES. Associations between SES measures and the presence and severity of VI were examined using multi-level, mixed-effects logistic and multinomial regression models. Results The age-adjusted prevalence of any VI was 19.62% (low vision = 19%, blindness = 0.62%). Both individual- and area-level SES were positively associated with any VI and low vision after adjusting for confounders. The odds ratio (95% confidence interval) of any VI was 2.11 (1.88–2.37) for low SES and 1.07 (1.02–1.13) per 1 standard deviation increase in SEDI. When stratified by unilateral/bilateral categories, while low SES showed significant associations with all categories, SEDI showed a significant association with bilateral low vision only. The association between low SES and any VI remained significant among all age, gender, and ethnic sub-groups. Although a consistent positive association was observed between area-level SEDI and any VI, the associations were significant among participants aged 40–65 years and among males. Conclusion In this community-based sample of Asian adults, both individual- and area-level SES were independently associated with the presence and severity of VI. PMID:26555141

  16. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  17. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  18. Visual Local and Global Processing in Low-Functioning Deaf Individuals with and without Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Maljaars, J. P. W.; Noens, I. L. J.; Scholte, E. M.; Verpoorten, R. A. W.; van Berckelaer-Onnes, I. A.

    2011-01-01

    Background: The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the…

  19. Visual Information Processing Deficits as Biomarkers of Vulnerability to Schizophrenia: An Event-Related Potential Study in Schizotypy

    ERIC Educational Resources Information Center

    Koychev, Ivan; El-Deredy, Wael; Haenschel, Corinna; Deakin, John Francis William

    2010-01-01

    We aimed to clarify the importance of early visual processing deficits for the formation of cognitive deficits in the schizophrenia spectrum. We carried out an event-related potential (ERP) study using a computerised delayed matching to sample working memory (WM) task on a sample of volunteers with high and low scores on the Schizotypal…

  20. Seeing meaning in action: a bidirectional link between visual perspective and action identification level.

    PubMed

    Libby, Lisa K; Shaeffer, Eric M; Eibach, Richard P

    2009-11-01

    Actions do not have inherent meaning but rather can be interpreted in many ways. The interpretation a person adopts has important effects on a range of higher order cognitive processes. One dimension on which interpretations can vary is the extent to which actions are identified abstractly--in relation to broader goals, personal characteristics, or consequences--versus concretely, in terms of component processes. The present research investigated how visual perspective (own 1st-person vs. observer's 3rd-person) in action imagery is related to action identification level. A series of experiments measured and manipulated visual perspective in mental and photographic images to test the connection with action identification level. Results revealed a bidirectional causal relationship linking 3rd-person images and abstract action identifications. These findings highlight the functional role of visual imagery and have implications for understanding how perspective is involved in action perception at the social, cognitive, and neural levels. Copyright 2009 APA

  1. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  2. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-03-01

    Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized at either the superordinate or the basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groups were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, the set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.

  3. JPEG XS, a new standard for visually lossless low-latency lightweight image compression

    NASA Astrophysics Data System (ADS)

    Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.

    2017-09-01

    JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to a Xilinx Artix 7 or 25% of an FPGA similar to an Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
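
    JPEG XS itself is far more elaborate than this, but the basic pattern of a wavelet transform followed by coefficient truncation can be sketched in a few lines of Python with PyWavelets; the wavelet, decomposition depth, and threshold below are arbitrary choices made for the illustration and are not parameters of the standard.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        image = rng.random((128, 128)).cumsum(axis=0).cumsum(axis=1)   # stand-in grayscale frame

        # Forward 2-D wavelet transform (two decomposition levels)
        coeffs = pywt.wavedec2(image, wavelet="db2", level=2)

        # Crude "rate control": hard-threshold the detail coefficients, keeping only the
        # largest-magnitude values. (JPEG XS instead codes groups of coefficients by their
        # greatest magnitude level and truncates bit-planes, which this does not reproduce.)
        threshold = 0.05 * np.abs(coeffs[0]).max()
        truncated = [coeffs[0]] + [
            tuple(pywt.threshold(detail, threshold, mode="hard") for detail in level)
            for level in coeffs[1:]
        ]

        reconstruction = pywt.waverec2(truncated, wavelet="db2")
        error = np.abs(reconstruction[:128, :128] - image).mean()
        print(f"mean absolute reconstruction error: {error:.4f}")

    An actual low-latency codec would additionally entropy-code the surviving coefficients line by line, which is where the 32-line latency budget mentioned above comes into play.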

  4. Bells Test: Are there differences in performance between adult groups aged 40-59 and 60-75?

    PubMed Central

    Paiva, Silvio Cesar Escovar; Viapiana, Vanisa Fante; Cardoso, Caroline de Oliveira; Fonseca, Rochele Paz

    2017-01-01

    Objective To verify whether differences exist between groups of Brazilian adults aged 40-59 and 60-75 in their respective performance on the Bells Test, given the dearth of literature investigating the relationship between focused visual attention and the age factor. Methods Eighty-four neurologically healthy adults (half aged 40-59 and half 60-75) with high educational level (40-59 years group: M=17.75 years' education; SD=4.00; 60-75 years group: M=15.85 years' education; SD=3.19) were assessed using the Bells Test. Data on accuracy and processing speed were compared between groups by ANCOVA, controlled for the covariates education and frequency of reading and writing habits. Results There were no significant differences between the age groups. Conclusion It is suggested that aging influences sustained and focused attention and processing speed on visual cancellation paradigms after 75 years of age, when executive and attentional changes tend to be more marked. Further studies should investigate healthy older and oldest-old adults, as well as groups with low and intermediate educational backgrounds. In addition, Brazilian clinical populations should also be characterized, particularly those with neurological disorders that might involve visual hemineglect. PMID:29213492

  5. Lexical Familiarity and Processing Efficiency: Individual Differences in Naming, Lexical Decision, and Semantic Categorization

    PubMed Central

    Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.

    2012-01-01

    College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087

  6. Activity in early visual areas predicts interindividual differences in binocular rivalry dynamics

    PubMed Central

    Yamashiro, Hiroyuki; Mano, Hiroaki; Umeda, Masahiro; Higuchi, Toshihiro; Saiki, Jun

    2013-01-01

    When dissimilar images are presented to the two eyes, binocular rivalry (BR) occurs, and perception alternates spontaneously between the images. Although neural correlates of the oscillating perception during BR have been found in multiple sites along the visual pathway, the source of BR dynamics is unclear. Psychophysical and modeling studies suggest that both low- and high-level cortical processes underlie BR dynamics. Previous neuroimaging studies have demonstrated the involvement of high-level regions by showing that frontal and parietal cortices responded time locked to spontaneous perceptual alternation in BR. However, a potential contribution of early visual areas to BR dynamics has been overlooked, because these areas also responded to the physical stimulus alternation mimicking BR. In the present study, instead of focusing on activity during perceptual switches, we highlighted brain activity during suppression periods to investigate a potential link between activity in human early visual areas and BR dynamics. We used a strong interocular suppression paradigm called continuous flash suppression to suppress and fluctuate the visibility of a probe stimulus and measured retinotopic responses to the onset of the invisible probe using functional MRI. There were ∼130-fold differences in the median suppression durations across 12 subjects. The individual differences in suppression durations could be predicted by the amplitudes of the retinotopic activity in extrastriate visual areas (V3 and V4v) evoked by the invisible probe. Weaker responses were associated with longer suppression durations. These results demonstrate that retinotopic representations in early visual areas play a role in the dynamics of perceptual alternations during BR. PMID:24353304

  7. Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury

    NASA Astrophysics Data System (ADS)

    Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.

    2008-02-01

    Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects on spatial contrast sensitivity and on a stimulus detection and aiming task were evaluated. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunction visual search display varying in size (visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection was slowed more in the complex-background search condition than in the simple-background condition. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background than in the complex-background search condition. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes, guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.

  8. A unified selection signal for attention and reward in primary visual cortex.

    PubMed

    Stănişor, Liviu; van der Togt, Chris; Pennartz, Cyriel M A; Roelfsema, Pieter R

    2013-05-28

    Stimuli associated with high rewards evoke stronger neuronal activity than stimuli associated with lower rewards in many brain regions. It is not well understood how these reward effects influence activity in sensory cortices that represent low-level stimulus features. Here, we investigated the effects of reward information in the primary visual cortex (area V1) of monkeys. We found that the reward value of a stimulus relative to the value of other stimuli is a good predictor of V1 activity. Relative value biases the competition between stimuli, just as has been shown for selective attention. The neuronal latency of this reward value effect in V1 was similar to the latency of attentional influences. Moreover, V1 neurons with a strong value effect also exhibited a strong attention effect, which implies that relative value and top-down attention engage overlapping, if not identical, neuronal selection mechanisms. Our findings demonstrate that the effects of reward value reach down to the earliest sensory processing levels of the cerebral cortex and imply that theories about the effects of reward coding and top-down attention on visual representations should be unified.

  9. Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia

    PubMed Central

    Li, Roger W.; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M.

    2011-01-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15–61 y; visual acuity: 20/25–20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40–80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other cortical dysfunctions. Trial Registration ClinicalTrials.gov NCT01223716 PMID:21912514

  10. Video-game play induces plasticity in the visual system of adults with amblyopia.

    PubMed

    Li, Roger W; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M

    2011-08-01

    Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other cortical dysfunctions. ClinicalTrials.gov NCT01223716.

  11. Psychological Adjustment and Levels of Self Esteem in Children with Visual-Motor Integration Difficulties Influences the Results of a Randomized Intervention Trial

    ERIC Educational Resources Information Center

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z.

    2013-01-01

    This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low…

  12. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans.

    PubMed

    Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter

    2017-01-01

    Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press.

  13. Differential Contribution of Low- and High-level Image Content to Eye Movements in Monkeys and Humans

    PubMed Central

    Wilming, Niklas; Kietzmann, Tim C.; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A.; König, Peter

    2017-01-01

    Abstract Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. PMID:28077512
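
    The low-level saliency models referred to in this record are of the kind that can be computed directly from image statistics. As one illustrative example (the spectral-residual model of Hou & Zhang, 2007, which is not necessarily the model used in this study), a saliency map can be obtained in a few lines of Python:

        import numpy as np
        from scipy.ndimage import uniform_filter, gaussian_filter

        def spectral_residual_saliency(image):
            """Low-level saliency map from the image's log-amplitude spectrum
            (Hou & Zhang, 2007); `image` is a 2-D grayscale array."""
            spectrum = np.fft.fft2(image)
            log_amplitude = np.log(np.abs(spectrum) + 1e-9)
            phase = np.angle(spectrum)
            residual = log_amplitude - uniform_filter(log_amplitude, size=3)
            saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            saliency = gaussian_filter(saliency, sigma=3)
            return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)

        rng = np.random.default_rng(2)
        frame = rng.random((256, 256))
        frame[100:140, 100:140] += 2.0          # a bright, locally distinctive patch
        salmap = spectral_residual_saliency(frame)
        print("most salient location:", np.unravel_index(salmap.argmax(), salmap.shape))

    Comparing maps of this kind against measured fixation locations is, in outline, how the contribution of low-level selection mechanisms to free viewing is quantified.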

  14. Visual Communications And Image Processing

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell; Tzou, Kou-Hu

    1989-07-01

    This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new development in specific subjects.

  15. Rarefied flow diagnostics using pulsed high-current electron beams

    NASA Technical Reports Server (NTRS)

    Wojcik, Radoslaw M.; Schilling, John H.; Erwin, Daniel A.

    1990-01-01

    The use of high-current short-pulse electron beams in low-density gas flow diagnostics is introduced. Efficient beam propagation is demonstrated for pressure up to 300 microns. The beams, generated by low-pressure pseudospark discharges in helium, provide extremely high fluorescence levels, allowing time-resolved visualization in high-background environments. The fluorescence signal frequency is species-dependent, allowing instantaneous visualization of mixing flowfields.

  16. Functional Magnetic Resonance Imaging to Assess the Neurobehavioral Impact of Dysphotopsia with Multifocal Intraocular Lenses.

    PubMed

    Rosa, Andreia M; Miranda, Ângela C; Patrício, Miguel; McAlinden, Colm; Silva, Fátima L; Murta, Joaquim N; Castelo-Branco, Miguel

    2017-09-01

    To investigate the association between dysphotopsia and neural responses in visual and higher-level cortical regions in patients who recently received multifocal intraocular lens (IOL) implants. Cross-sectional study. Thirty patients 3 to 4 weeks after bilateral cataract surgery with diffractive IOL implantation and 15 age- and gender-matched control subjects. Functional magnetic resonance imaging (fMRI) was performed when participants viewed low-contrast grating stimuli. A light source surrounded the stimuli in half of the runs to induce disability glare. Visual acuity, wavefront analysis, Quality of Vision (QoV) questionnaire, and psychophysical assessment were performed. Cortical activity (blood oxygen level dependent [BOLD] signal) in the primary visual cortex and in higher-level brain areas, including the attention network. When viewing low-contrast stimuli under glare, patients showed significant activation of the effort-related attention network in the early postoperative period, involving the frontal, middle frontal, parietal frontal, and postcentral gyrus (multisubject random-effects general linear model (GLM), P < 0.03). In contrast, controls showed only relative deactivation (due to lower visibility) of visual areas (occipital lobe and middle occipital gyrus, P < 0.03). Patients also had relatively stronger recruitment of cortical areas involved in learning (anterior cingulate gyrus), task planning, and solving (caudate body). Patients reporting greater symptoms induced by dysphotic symptoms showed significantly increased activity in several regions in frontoparietal circuits, as well as cingulate gyrus and caudate nucleus (q < 0.05). We found no correlation between QoV questionnaire scores and optical properties (total and higher order aberration, modulation transfer function, and Strehl ratio). This study shows the association between patient-reported subjective difficulties and fMRI outcomes, independent of optical parameters and psychophysical performance. The increased activity of cortical areas dedicated to attention (frontoparietal circuits), to learning and cognitive control (cingulate), and to task goals (caudate) likely represents the beginning of the neuroadaptation process to multifocal IOLs. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  17. Early visual processing is enhanced in the midluteal phase of the menstrual cycle.

    PubMed

    Lusk, Bethany R; Carr, Andrea R; Ranson, Valerie A; Bryant, Richard A; Felmingham, Kim L

    2015-12-01

    Event-related potential (ERP) studies have revealed an early attentional bias in processing unpleasant emotional images in women. Recent neuroimaging data suggest there are significant differences in cortical emotional processing according to menstrual phase. This study examined the impact of menstrual phase on visual emotional processing in women compared to men. ERPs were recorded from 28 early follicular women, 29 midluteal women, and 27 men while they completed a passive viewing task of neutral and low- and high-arousing pleasant and unpleasant images. There was a significant effect of menstrual phase in early visual processing, as midluteal women displayed significantly greater P1 amplitude at occipital regions to all visual images compared to men. Both midluteal and early follicular women displayed larger N1 amplitudes than men (although this only reached significance for the midluteal group) to the visual images. No sex or menstrual phase differences were apparent in later N2, P3, or LPP. A condition effect demonstrated greater P3 and LPP amplitude to highly-arousing unpleasant images relative to all other stimulus conditions. These results indicate that women have greater early automatic visual processing compared to men, and suggest that this effect is particularly strong in women in the midluteal phase at the earliest stage of visual attention processing. Our findings highlight the importance of considering menstrual phase when examining sex differences in the cortical processing of visual stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Abnormalities in early visual processes are linked to hypersociability and atypical evaluation of facial trustworthiness: An ERP study with Williams syndrome.

    PubMed

    Shore, Danielle M; Ng, Rowena; Bellugi, Ursula; Mills, Debra L

    2017-10-01

    Accurate assessment of trustworthiness is fundamental to successful and adaptive social behavior. Initially, people assess trustworthiness from facial appearance alone. These assessments then inform critical approach or avoid decisions. Individuals with Williams syndrome (WS) exhibit a heightened social drive, especially toward strangers. This study investigated the temporal dynamics of facial trustworthiness evaluation in neurotypic adults (TD) and individuals with WS. We examined whether differences in neural activity during trustworthiness evaluation may explain increased approach motivation in WS compared to TD individuals. Event-related potentials were recorded while participants appraised faces previously rated as trustworthy or untrustworthy. TD participants showed increased sensitivity to untrustworthy faces within the first 65-90 ms, indexed by the negative-going rise of the P1 onset (oP1). The amplitude of the oP1 difference to untrustworthy minus trustworthy faces was correlated with lower approachability scores. In contrast, participants with WS showed increased N170 amplitudes to trustworthy faces. The N170 difference to low-high-trust faces was correlated with low approachability in TD and high approachability in WS. The findings suggest that hypersociability associated with WS may arise from abnormalities in the timing and organization of early visual brain activity during trustworthiness evaluation. More generally, the study provides support for the hypothesis that impairments in low-level perceptual processes can have a cascading effect on social cognition.

  19. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  20. System identification and sensorimotor determinants of flight maneuvers in an insect

    NASA Astrophysics Data System (ADS)

    Sponberg, Simon; Hall, Robert; Roth, Eatai

    Locomotor maneuvers are inherently closed-loop processes. They are generally characterized by the integration of multiple sensory inputs and adaptation or learning over time. To probe sensorimotor processing, we take a system identification approach, treating the underlying physiological systems as dynamic processes and altering the feedback topology in both experiment and analysis. As a model system, we use agile hawk moths (Manduca sexta), which feed from real and robotic flowers while hovering in midair. Moths rely on vision and mechanosensation to track floral targets and can do so at exceptionally low luminance levels, despite hovering being a mechanically unstable behavior that requires neural feedback to stabilize. By altering the sensory environment and placing mechanical and visual signals in conflict, we show that a surprisingly simple linear summation of visual and mechanosensory responses produces a generative prediction of behavior in response to novel stimuli. Tracking performance is also limited more by the mechanics of flight than by the magnitude of the sensory cue. A feedback systems approach to locomotor control results in new insights into how behavior emerges from the interaction of nonlinear physiological systems.
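
    In practice, the system identification sketched here usually comes down to estimating a frequency response function from paired stimulus and response recordings. The Python below shows the conventional cross-spectral estimate H(f) = S_xy(f)/S_xx(f) applied to synthetic signals; the sum-of-sines stimulus and the first-order "moth" filter are stand-ins for illustration, not the study's data or model.

        import numpy as np
        from scipy import signal

        fs = 100.0                                   # sample rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(3)

        # Synthetic "flower" stimulus: a sum of sines, as in sum-of-sines tracking experiments
        freqs = [0.2, 0.5, 1.1, 2.3]
        stimulus = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in freqs)

        # Stand-in closed-loop tracker: a first-order low-pass plus sensor noise
        b, a = signal.butter(1, 1.0, fs=fs)          # 1 Hz cutoff as a toy tracking dynamic
        response = signal.lfilter(b, a, stimulus) + 0.05 * rng.standard_normal(t.size)

        # Empirical frequency response: H(f) = S_xy(f) / S_xx(f)
        f_xy, S_xy = signal.csd(stimulus, response, fs=fs, nperseg=1024)
        f_xx, S_xx = signal.welch(stimulus, fs=fs, nperseg=1024)
        H = S_xy / S_xx

        for f in freqs:                              # gain and phase at the probe frequencies
            i = np.argmin(np.abs(f_xy - f))
            print(f"{f:4.1f} Hz: gain {np.abs(H[i]):.2f}, phase {np.degrees(np.angle(H[i])):6.1f} deg")

    Fitting such frequency responses under different sensory conditions is what allows visual and mechanosensory contributions to be separated and, as claimed above, summed linearly.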

  1. Effective connectivity in the neural network underlying coarse-to-fine categorization of visual scenes. A dynamic causal modeling study.

    PubMed

    Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole

    2015-10-01

    According to current models of visual perception scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
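
    In DCM, effective connectivity is summarized by a bilinear neuronal state equation; its generic form (standard in the DCM literature and given here only as background, not transcribed from this study) is, in LaTeX notation,

        \dot{x} = \Bigl( A + \sum_{j} u_j\, B^{(j)} \Bigr) x + C u ,

    where x is the vector of regional neuronal states, u the experimental inputs, A the endogenous coupling between regions, B^{(j)} the modulation of that coupling by input j, and C the direct driving influence of inputs on regions. Condition-dependent increases in connectivity strength of the kind reported here would typically be expressed in the B parameters.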

  2. Are children with low vision adapted to the visual environment in classrooms of mainstream schools?

    PubMed

    Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi

    2018-02-01

    The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in a low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (the height of lowercase letters written on the chalkboard) recommended to be 3 cm. For the 3/60-6/60 range, the maximum viewing distance with a visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Simple modifications of the visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools.
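
    The size and distance recommendations reported here follow from the geometry of visual angle: a letter of height h viewed from distance d subtends an angle of 2·atan(h / (2d)). The short Python check below simply evaluates that geometry for the figures quoted above; it does not reproduce the authors' acuity-reserve criterion.

        import math

        def visual_angle_arcmin(height_m, distance_m):
            """Visual angle (minutes of arc) subtended by a target of a given height
            at a given viewing distance."""
            return math.degrees(2 * math.atan(height_m / (2 * distance_m))) * 60

        # Figures quoted in the abstract (mild VI, and the 3/60-6/60 range)
        print(f"3 cm letter at 4.3 m:  {visual_angle_arcmin(0.03, 4.3):.1f} arcmin")
        print(f"4 cm letter at 1.7 m:  {visual_angle_arcmin(0.04, 1.7):.1f} arcmin")
        print(f"4 cm letter at 0.85 m: {visual_angle_arcmin(0.04, 0.85):.1f} arcmin")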

  3. Preserved figure-ground segregation and symmetry perception in visual neglect.

    PubMed

    Driver, J; Baylis, G C; Rafal, R D

    1992-11-05

    A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.

  4. TMS over the right precuneus reduces the bilateral field advantage in visual short term memory capacity.

    PubMed

    Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A

    2015-01-01

    Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Visual perceptual load induces inattentional deafness.

    PubMed

    Macdonald, James S P; Lavie, Nilli

    2011-08-01

    In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.

  6. Visual processing in anorexia nervosa and body dysmorphic disorder: similarities, differences, and future research directions

    PubMed Central

    Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.

    2013-01-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196

  7. Dissociating functional brain networks by decoding the between-subject variability

    PubMed Central

    Seghier, Mohamed L.; Price, Cathy J.

    2009-01-01

    In this study we illustrate how the functional networks involved in a single task (e.g. the sensory, cognitive and motor components) can be segregated without cognitive subtractions at the second-level. The method used is based on meaningful variability in the patterns of activation between subjects with the assumption that regions belonging to the same network will have comparable variations from subject to subject. fMRI data were collected from thirty nine healthy volunteers who were asked to indicate with a button press if visually presented words were semantically related or not. Voxels were classified according to the similarity in their patterns of between-subject variance using a second-level unsupervised fuzzy clustering algorithm. The results were compared to those identified by cognitive subtractions of multiple conditions tested in the same set of subjects. This illustrated that the second-level clustering approach (on activation for a single task) was able to identify the functional networks observed using cognitive subtractions (e.g. those associated with vision, semantic associations or motor processing). In addition the fuzzy clustering approach revealed other networks that were not dissociated by the cognitive subtraction approach (e.g. those associated with high- and low-level visual processing and oculomotor movements). We discuss the potential applications of our method which include the identification of “hidden” or unpredicted networks as well as the identification of systems level signatures for different subgroupings of clinical and healthy populations. PMID:19150501
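
    As a rough sketch of the clustering step described here (not the authors' exact second-level algorithm), the code below runs a minimal fuzzy c-means over a voxels-by-subjects matrix, so that voxels whose activation varies across subjects in a similar way end up sharing high membership in the same cluster. The array sizes, number of clusters, fuzziness exponent, and toy data are all assumptions.

    ```python
    import numpy as np

    def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy c-means: rows of X are items (here, voxels),
        columns are features (here, per-subject activation estimates)."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], n_clusters))
        U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per voxel
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster centres
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2.0 / (m - 1.0)))             # closer centre -> higher membership
            U /= U.sum(axis=1, keepdims=True)
        return U, centers

    # Toy data: 500 "voxels" x 39 "subjects", two latent networks with
    # different between-subject variability profiles (illustrative only).
    rng = np.random.default_rng(1)
    profile_a, profile_b = rng.normal(size=39), rng.normal(size=39)
    X = np.vstack([np.outer(np.ones(250), profile_a),
                   np.outer(np.ones(250), profile_b)]) + 0.3 * rng.normal(size=(500, 39))
    U, _ = fuzzy_cmeans(X)
    labels = U.argmax(axis=1)
    print(labels[:5], labels[-5:])   # voxels from the two networks separate
    ```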

  8. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement: Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
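
    The phase-dependent boost described at the end of the abstract can be illustrated with a toy gain model (not the authors' analysis): the amplitude of a response to a stimulus is scaled by the instantaneous value of an ongoing 5 Hz oscillation, so stimuli arriving near the high-excitability peak are amplified and those near the trough are attenuated. The oscillation frequency, gain, and linear form of the modulation are assumptions.

    ```python
    import numpy as np

    OSC_FREQ = 5.0   # assumed frequency of the familiarity-evoked oscillation (Hz)

    def evoked_amplitude(stim_time_s, base_amp=1.0, gain=0.5):
        """Toy phase gate: the response to a stimulus is scaled by the value of
        the ongoing oscillation at stimulus onset (peak = high excitability)."""
        phase_value = np.sin(2 * np.pi * OSC_FREQ * stim_time_s)
        return base_amp * (1.0 + gain * phase_value)

    print(evoked_amplitude(0.05))   # onset at the peak of a 5 Hz cycle -> 1.5
    print(evoked_amplitude(0.15))   # onset at the trough -> 0.5
    ```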

  9. Visual perception as retrospective Bayesian decoding from high- to low-level features

    PubMed Central

    Ding, Stephanie; Cueva, Christopher J.; Tsodyks, Misha; Qian, Ning

    2017-01-01

    When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding. PMID:29073108
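
    A minimal numerical sketch of the retrospective-decoding idea (a toy grid-based Bayesian computation, not the authors' model): two orientations are held in noisy working memory, and the more reliable ordinal judgment ("the second was more clockwise") is applied as a constraint when the absolute orientations are decoded from those memories. The noise level, grid, and flat prior are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true1, true2 = 40.0, 45.0            # two orientations (deg); the second is more clockwise
    sigma = 6.0                          # working-memory noise (deg), assumed
    m1, m2 = true1 + sigma * rng.normal(), true2 + sigma * rng.normal()

    grid = np.arange(0.0, 90.0, 0.5)     # candidate orientations
    G1, G2 = np.meshgrid(grid, grid, indexing="ij")

    # Likelihood of each candidate pair given the two noisy memory samples
    loglik = -((G1 - m1) ** 2 + (G2 - m2) ** 2) / (2 * sigma ** 2)
    post = np.exp(loglik - loglik.max())

    # Retrospective constraint from the (more reliable) ordinal judgment:
    # the observer correctly reported "the second orientation is more clockwise".
    post_constrained = post * (G2 > G1)
    post /= post.sum()
    post_constrained /= post_constrained.sum()

    est_plain = ((post * G1).sum(), (post * G2).sum())
    est_retro = ((post_constrained * G1).sum(), (post_constrained * G2).sum())
    print("raw memories:        ", round(m1, 1), round(m2, 1))
    print("without constraint:  ", [round(v, 1) for v in est_plain])
    print("with ordinal prior:  ", [round(v, 1) for v in est_retro])
    ```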

  10. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2017-05-01

    Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. FFR provides both the invariant representation of object features in Sensory Memory and shifts of attention in Motor Memory. Object recognition consists in successive recall (from Motor Memory) and execution of shifts of attention and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.

  12. Figure-ground activity in primary visual cortex (V1) of the monkey matches the speed of behavioral response.

    PubMed

    Supèr, Hans; Spekreijse, Henk; Lamme, Victor A F

    2003-06-26

    To look at an object, its position in the visual scene has to be localized and, subsequently, appropriate oculo-motor behavior needs to be initiated. This kind of behavior is largely controlled by the cortical executive system, such as the frontal eye field. In this report, we analyzed neural activity in the visual cortex in relation to oculo-motor behavior. We show that in a figure-ground detection task, the strength of late modulated activity in the primary visual cortex correlates with the saccade latency. We propose that this may indicate that the variability of reaction times in the detection of a visual stimulus is reflected in low-level visual areas as well as in high-level areas.

  13. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Conceptual analysis of Physiology of vision in Ayurveda

    PubMed Central

    Balakrishnan, Praveen; Ashwini, M. J.

    2014-01-01

    The process by which the outside world is seen is termed the visual process, or the physiology of vision. There are three phases in this visual process: refraction of light, conversion of light energy into electrical impulses, and finally peripheral and central neurophysiology. With the advent of modern instruments, the step-by-step biochemical changes occurring at each level of the visual process have been deciphered. Many investigations have emerged to track these changes and help diagnose the exact nature of disease. Ayurveda describes this physiology of vision in terms of the functions of vata and pitta. The philosophical textbook of ayurveda, Tarka Sangraha, gives certain basic facts about the visual process. This article discusses the second and third phases of the visual process, analyzing them step by step through the lens of ayurveda, amalgamated with the philosophical basics of Tarka Sangraha, to generate a concrete idea of the physiology and thereby interpret the pathology on ayurvedic grounds in light of the investigative reports. PMID:25336853

  15. Chromatic information and feature detection in fast visual analysis

    DOE PAGES

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...

    2016-08-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  16. Chromatic information and feature detection in fast visual analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  17. Visual perception and imagery: a new molecular hypothesis.

    PubMed

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representation) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.

  18. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptations (i.e., aftereffects), suggesting the need to explore if theta oscillations may facilitate learning. To our knowledge, this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919
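
    The trial-partitioning step described here can be sketched in a few lines (a toy stand-in, not the authors' ERSP pipeline): epochs are split at the participant's median angular error and band-limited theta power is compared between the two halves. The sampling rate, single-channel toy data, and FFT-based band-power estimate are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 250                                          # sampling rate (Hz), assumed
    n_trials, n_samples = 120, fs                     # 1 s epochs time-locked to visual feedback
    errors = np.abs(rng.normal(6.0, 3.0, n_trials))   # angular error per trial (deg), toy values
    epochs = rng.normal(size=(n_trials, n_samples))   # toy single-channel EEG epochs

    def theta_power(epoch, fs, band=(4.0, 7.0)):
        """Mean spectral power in the theta band, via a plain FFT periodogram."""
        freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()

    median_err = np.median(errors)
    high = np.array([theta_power(e, fs) for e in epochs[errors > median_err]])
    low = np.array([theta_power(e, fs) for e in epochs[errors <= median_err]])
    print("mean theta power, high-error trials:", high.mean())
    print("mean theta power, low-error trials: ", low.mean())
    ```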

  19. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptations (i.e., aftereffects), suggesting the need to explore if theta oscillations may facilitate learning. To our knowledge, this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours.

  20. Global and local processing near the left and right hands

    PubMed Central

    Langerak, Robin M.; La Mantia, Carina L.; Brown, Liana E.

    2013-01-01

    Visual targets can be processed more quickly and reliably when a hand is placed near the target. Both unimodal and bimodal representations of hands are largely lateralized to the contralateral hemisphere, and since each hemisphere demonstrates specialized cognitive processing, it is possible that targets appearing near the left hand may be processed differently than targets appearing near the right hand. The purpose of this study was to determine whether visual processing near the left and right hands interacts with hemispheric specialization. We presented hierarchical-letter stimuli (e.g., small characters used as local elements to compose large characters at the global level) near the left or right hands separately and instructed participants to discriminate the presence of target letters (X and O) from non-target letters (T and U) at either the global or local levels as quickly as possible. Targets appeared at either the global or local level of the display, at both levels, or were absent from the display; participants made foot-press responses. When discriminating target presence at the global level, participants responded more quickly to stimuli presented near the left hand than near either the right hand or in the no-hand condition. Hand presence did not influence target discrimination at the local level. Our interpretation is that left-hand presence may help participants discriminate global information, a right hemisphere (RH) process, and that the left hand may influence visual processing in a way that is distinct from the right hand. PMID:24194725

  1. Predicting successful tactile mapping of virtual objects.

    PubMed

    Brayda, Luca; Campus, Claudio; Gori, Monica

    2013-01-01

    Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.

  2. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consists of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing √7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.
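
    A simplified square-lattice analogue of the pyramid construction described above (the actual pyramid uses a hexagonal lattice, seven oriented quadrature kernels, and a factor-7 reduction per level, none of which are reproduced here): each level low-pass filters the previous level's low-pass output and subsamples it, so the number of coefficients shrinks at every level. The kernel, subsampling factor, and image size are illustrative assumptions.

    ```python
    import numpy as np

    def build_pyramid(image, n_levels=4):
        """Simplified low-pass image pyramid on a square lattice: at each level the
        previous level's low-pass output is box-filtered and subsampled by 2."""
        levels = [image.astype(float)]
        kernel = np.ones((3, 3)) / 9.0
        for _ in range(n_levels - 1):
            img = levels[-1]
            padded = np.pad(img, 1, mode="reflect")
            # same-size 3x3 convolution as a weighted sum of shifted windows
            low = sum(kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3))
            levels.append(low[::2, ::2])                 # subsample by 2 in each dimension
        return levels

    pyr = build_pyramid(np.random.default_rng(0).random((64, 64)))
    print([lvl.shape for lvl in pyr])   # (64, 64), (32, 32), (16, 16), (8, 8)
    ```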

  3. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, while increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. It may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally-evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.
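
    The parametric noise manipulation can be illustrated with a small sketch (the mixing rule and noise type are assumptions; the study's actual stimulus construction is not specified in the abstract): an image is blended with noise in a given proportion, so the amount of visual information available to the observer varies continuously with that proportion.

    ```python
    import numpy as np

    def add_visual_noise(image, proportion, rng=None):
        """Blend an image with white noise; proportion=0 returns the image,
        proportion=1 returns pure noise (values kept in [0, 1])."""
        rng = rng or np.random.default_rng()
        noise = rng.random(image.shape)
        return np.clip((1.0 - proportion) * image + proportion * noise, 0.0, 1.0)

    img = np.random.default_rng(0).random((32, 32))           # stand-in for a scene
    stimuli = {p: add_visual_noise(img, p, np.random.default_rng(1))
               for p in (0.0, 0.25, 0.5, 0.75)}                # parametric noise levels
    print({p: round(float(np.corrcoef(img.ravel(), s.ravel())[0, 1]), 2)
           for p, s in stimuli.items()})                       # visibility drops with noise
    ```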

  4. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2012-10-01

    [Fragmented DTIC record excerpt; only partial text is recoverable. The excerpt notes that the paradigm makes visual nystagmus measurement more robust because absolute gaze is not measured (which would require a gaze calibration). Figure residue: a plot of gaze horizontal position over time while the dots drifted to the right, with a red bar marking a visual nystagmus event. Table residue (Reflex / Stimulus / Functions): visual nystagmus, luminance grating, low-level motion; equiluminant grating, color vision; contrast gratings at 3 ... (truncated).]

  5. Languages on the screen: is film comprehension related to the viewers' fluency level and to the language in the subtitles?

    PubMed

    Lavaur, Jean-Marc; Bairstow, Dominique

    2011-12-01

    This research aimed at studying the role of subtitling in film comprehension. It focused on the languages in which the subtitles are written and on the participants' fluency levels in the languages presented in the film. In a preliminary part of the study, the most salient visual and dialogue elements of a short sequence of an English film were extracted by the means of a free recall task after showing two versions of the film (first a silent, then a dubbed-into-French version) to native French speakers. This visual and dialogue information was used in the setting of a questionnaire concerning the understanding of the film presented in the main part of the study, in which other French native speakers with beginner, intermediate, or advanced fluency levels in English were shown one of three versions of the film used in the preliminary part. Respectively, these versions had no subtitles or they included either English or French subtitles. The results indicate a global interaction between all three factors in this study: For the beginners, visual processing dropped from the version without subtitles to that with English subtitles, and even more so if French subtitles were provided, whereas the effect of film version on dialogue comprehension was the reverse. The advanced participants achieved higher comprehension for both types of information with the version without subtitles, and dialogue information processing was always better than visual information processing. The intermediate group similarly processed dialogues in a better way than visual information, but was not affected by film version. These results imply that, depending on the viewers' fluency levels, the language of subtitles can have different effects on movie information processing.

  6. Basic level category structure emerges gradually across human ventral visual cortex.

    PubMed

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
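
    The two pattern-similarity quantities invoked here, within-category similarity (cohesion) and between-category dissimilarity (distinctiveness), can be sketched directly from pairwise correlations of multivoxel patterns. The sketch below is illustrative only; the pattern matrix shapes, the use of Pearson correlation, and the toy data are assumptions about details the abstract does not give.

    ```python
    import numpy as np

    def cohesion_and_distinctiveness(patterns, labels):
        """patterns: (n_items, n_voxels) activity patterns; labels: category per item.
        Returns mean within-category correlation and mean between-category
        dissimilarity (1 - correlation)."""
        corr = np.corrcoef(patterns)
        within, between = [], []
        n = len(labels)
        for i in range(n):
            for j in range(i + 1, n):
                (within if labels[i] == labels[j] else between).append(corr[i, j])
        return np.mean(within), 1.0 - np.mean(between)

    # Toy patterns: two categories, each built from a shared template plus noise.
    rng = np.random.default_rng(0)
    base = {"dog": rng.normal(size=200), "car": rng.normal(size=200)}
    labels = ["dog"] * 10 + ["car"] * 10
    patterns = np.array([base[c] + 0.8 * rng.normal(size=200) for c in labels])
    print(cohesion_and_distinctiveness(patterns, labels))
    ```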

  7. Visual perception of ADHD children with sensory processing disorder.

    PubMed

    Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi

    2014-04-01

    The aim of the present study was to investigate the visual perception difference between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; the subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and the Mann-Whitney U test was conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to the ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r=0.368*) and the Low Energy/Weak section of the SSP (r=0.369*). The results of the present study suggest that among children with ADHD, visual perception is lower in those children with co-morbid SPD. Also, visual perception may be related to sensory processing, especially reactions of the vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.

  8. Global Sensory Qualities and Aesthetic Experience in Music.

    PubMed

    Brattico, Pauli; Brattico, Elvira; Vuust, Peter

    2017-01-01

    A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example, scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of the cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we consider that the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, also global sonic properties could be viewed as contributing toward creating an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in the light of recent advances in neuroaesthetics.

  9. Distinct Feedforward and Feedback Effects of Microstimulation in Visual Cortex Reveal Neural Mechanisms of Texture Segregation.

    PubMed

    Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R

    2017-07-05

    The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.

    PubMed

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.
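
    A toy biased-competition sketch in the spirit of the model described here (illustrative only, not the published implementation; all parameters are assumptions): stimulus-driven units inhibit one another, task-relevant units receive a top-down bias, and under high perceptual load more task items compete, so the irrelevant distractor is suppressed more strongly than under low load.

    ```python
    import numpy as np

    def settle(inputs, bias, w_inhib=0.4, steps=300, dt=0.05):
        """Rate units with mutual inhibition and an additive top-down bias;
        returns the settled firing rates."""
        r = np.zeros_like(inputs, dtype=float)
        for _ in range(steps):
            inhibition = w_inhib * (r.sum() - r)                 # inhibition from all other units
            r += dt * (-r + np.maximum(inputs + bias - inhibition, 0.0))
        return r

    # Last unit is the irrelevant distractor; the others are task-relevant items.
    low_load_inputs = np.array([1.0, 1.0])                       # 1 task item + distractor
    high_load_inputs = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # 5 task items + distractor
    low_bias = np.array([0.5, 0.0])
    high_bias = np.array([0.5] * 5 + [0.0])

    print("distractor rate, low load :", settle(low_load_inputs, low_bias)[-1].round(2))
    print("distractor rate, high load:", settle(high_load_inputs, high_bias)[-1].round(2))
    ```

    With these assumed parameters the distractor unit settles to a clearly positive rate under low load but is driven to near zero under high load, mirroring the load-dependent distractor suppression the record discusses.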

  11. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  12. A neuroimaging study of conflict during word recognition.

    PubMed

    Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F

    2010-08-04

    Using functional magnetic resonance imaging the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.

  13. How task demands shape brain responses to visual food cues.

    PubMed

    Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme

    2017-06-01

    Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.

  14. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
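
    The Sum-Modified-Laplacian (SML) focus measure and the block-wise decision map used in the final fusion step can be sketched as below. This is a simplified spatial-domain stand-in: it omits the DTCWT stage, the SML-based visual contrast rule for low-frequency coefficients, and consistency verification, and the block size and toy images are assumptions.

    ```python
    import numpy as np

    def sum_modified_laplacian(img):
        """Per-pixel modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
        p = np.pad(img.astype(float), 1, mode="edge")
        ml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
        ml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
        return ml_x + ml_y

    def fuse_by_blocks(img_a, img_b, block=8):
        """Pick, block by block, the source image with the larger summed SML."""
        sml_a, sml_b = sum_modified_laplacian(img_a), sum_modified_laplacian(img_b)
        fused = np.empty_like(img_a, dtype=float)
        h, w = img_a.shape
        for i in range(0, h, block):
            for j in range(0, w, block):
                sl = (slice(i, i + block), slice(j, j + block))
                fused[sl] = img_a[sl] if sml_a[sl].sum() >= sml_b[sl].sum() else img_b[sl]
        return fused

    # Toy example: each source is sharp in one half and smoothed in the other.
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    smooth = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
              + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
    left = np.arange(64)[None, :] < 32
    img_a = np.where(left, sharp, smooth)   # sharp on the left only
    img_b = np.where(left, smooth, sharp)   # sharp on the right only
    fused = fuse_by_blocks(img_a, img_b)
    print(np.abs(fused - sharp).mean() < np.abs(img_a - sharp).mean())   # True
    ```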

  15. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale based fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the thus obtained fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composited image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focusing areas and then a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.

  16. Visual Dialect: Ethnovisual and Sociovisual Elements of Design in Public Service Communication.

    ERIC Educational Resources Information Center

    Schiffman, Carole B.

    Graphic design is a form of communication by which visual messages are conveyed to a viewer. Audience needs and views must steer the design process when constructing public service visual messages. Well-educated people may be better able to comprehend visuals which require some level of interpretation or extend beyond their world view. Public…

  17. The differential contributions of visual imagery constructs on autobiographical thinking.

    PubMed

    Aydin, Cagla

    2018-02-01

    There is a growing theoretical and empirical consensus on the central role of visual imagery in autobiographical memory. However, findings from studies that explore how individual differences in visual imagery are reflected on autobiographical thinking do not present a coherent story. One reason for the mixed findings was suggested to be the treatment of visual imagery as an undifferentiated construct while evidence shows that there is more than one type of visual imagery. The present study investigates the relative contributions of different imagery constructs; namely, object and spatial imagery, on autobiographical memory processes. Additionally, it explores whether a similar relation extends to imagining the future. The results indicate that while object imagery was significantly correlated with several phenomenological characteristics, such as the level of sensory and perceptual details for past events - but not for future events - spatial imagery predicted the level of episodic specificity for both past and future events. We interpret these findings as object imagery being recruited in tasks of autobiographical memory that employ reflective processes while spatial imagery is engaged during direct retrieval of event details. Implications for the role of visual imagery in autobiographical thinking processes are discussed.

  18. Innovative Pressure Sensor Platform and Its Integration with an End-User Application

    PubMed Central

    Flores-Caballero, Antonio; Copaci, Dorin; Blanco, María Dolores; Moreno, Luis; Herrán, Jaime; Fernández, Iván; Ochoteco, Estíbaliz; Cabañero, German; Grande, Hans

    2014-01-01

    This paper describes the full integration of an innovative, low-cost pressure sensor sheet based on bendable, printed electronics technology. All integration stages are covered, from the lowest-level functional system (physical analog sensor data acquisition), through embedded data processing, to the end-user interactive visual application. The embedded data-acquisition software and hardware were developed using Rapid Control Prototyping (RCP). Finally, after successful testing of the first electronic prototype, tailor-made electronics were developed, reducing the electronics volume to 3.5 cm × 6 cm × 2 cm with a maximum power consumption of 765 mW for both the electronics and the pressure sensor sheet. PMID:24922455

  19. Visuospatial anatomy comprehension: the role of spatial visualization ability and problem-solving strategies.

    PubMed

    Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D

    2014-01-01

    The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems. © 2013 American Association of Anatomists.

  20. Biocharts: a visual formalism for complex biological systems

    PubMed Central

    Kugler, Hillel; Larjo, Antti; Harel, David

    2010-01-01

    We address one of the central issues in devising languages, methods and tools for the modelling and analysis of complex biological systems, that of linking high-level (e.g. intercellular) information with lower-level (e.g. intracellular) information. Adequate ways of dealing with this issue are crucial for understanding biological networks and pathways, which typically contain huge amounts of data that continue to grow as our knowledge and understanding of a system increases. Trying to comprehend such data using the standard methods currently in use is often virtually impossible. We propose a two-tier compound visual language, which we call Biocharts, that is geared towards building fully executable models of biological systems. One of the main goals of our approach is to enable biologists to actively participate in the computational modelling effort, in a natural way. The high-level part of our language is a version of statecharts, which have been shown to be extremely successful in software and systems engineering. The statecharts can be combined with any appropriately well-defined language (preferably a diagrammatic one) for specifying the low-level dynamics of the pathways and networks. We illustrate the language and our general modelling approach using the well-studied process of bacterial chemotaxis. PMID:20022895
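
    A minimal two-tier sketch in the spirit of Biocharts' running example (illustrative only; the state names, transition rule, and parameters are assumptions rather than the paper's model): a high-level, statechart-like controller switches between RUN and TUMBLE states, while a low-level quantitative rule makes tumbling less likely when the attractant concentration has just increased, producing net drift up the gradient.

    ```python
    import random

    def attractant(x):
        """Attractant concentration increases to the right (toy 1-D gradient)."""
        return 0.1 * x

    def simulate(steps=2000, seed=0):
        rng = random.Random(seed)
        state, x, direction = "RUN", 0.0, 1
        prev_c = attractant(x)
        for _ in range(steps):
            c = attractant(x)
            if state == "RUN":
                x += 0.1 * direction                     # low-level dynamics: swim
                # high-level transition: tumble more often when the gradient is unfavourable
                p_tumble = 0.02 if c > prev_c else 0.2
                if rng.random() < p_tumble:
                    state = "TUMBLE"
            else:  # TUMBLE: pick a new direction, then resume running
                direction = rng.choice([-1, 1])
                state = "RUN"
            prev_c = c
        return x

    print(simulate())   # net drift up the gradient (positive x) on average
    ```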
