ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
Global processing takes time: A meta-analysis on local-global visual processing in ASD.
Van der Hallen, Ruth; Evers, Kris; Brewaeys, Katrien; Van den Noortgate, Wim; Wagemans, Johan
2015-05-01
What does an individual with autism spectrum disorder (ASD) perceive first: the forest or the trees? In spite of 30 years of research and influential theories like the weak central coherence (WCC) theory and the enhanced perceptual functioning (EPF) account, the interplay of local and global visual processing in ASD remains only partly understood. Research findings vary in indicating a local processing bias or a global processing deficit, and often contradict each other. We applied a formal meta-analytic approach and combined 56 articles that tested about 1,000 ASD participants and used a wide range of stimuli and tasks to investigate local and global visual processing in ASD. Overall, results show neither enhanced local visual processing nor a deficit in global visual processing. Detailed analysis reveals a difference in the temporal pattern of the local-global balance, that is, slow global processing in individuals with ASD. Whereas task-dependent interaction effects are obtained, the gender, age, and IQ of either participant group seem to have no direct influence on performance. Based on the overview of the literature, suggestions are made for future research.
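As an aside on the pooling step described above: a random-effects meta-analysis of this kind commonly combines per-study effect sizes by inverse-variance weighting. The sketch below is one way such a pooled estimate can be computed (a DerSimonian-Laird estimator); the effect sizes and variances are purely hypothetical placeholders, not values from the 56 articles.

```python
import numpy as np

# Hypothetical standardized effect sizes (e.g. Hedges' g) and their sampling
# variances for a handful of studies; NOT the actual values from the meta-analysis.
g = np.array([0.10, -0.05, 0.30, 0.02, -0.15])
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07])

# Fixed-effect weights and Cochran's Q for heterogeneity
w = 1.0 / v
fixed = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - fixed) ** 2)
df = len(g) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f}, 95% CI ± {1.96 * se:.3f}, tau^2 = {tau2:.3f}")
```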
Perception of shapes targeting local and global processes in autism spectrum disorders.
Grinter, Emma J; Maybery, Murray T; Pellicano, Elizabeth; Badcock, Johanna C; Badcock, David R
2010-06-01
Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and global form processing ability. Within the visual domain, radial frequency (RF) patterns - shapes formed by sinusoidally varying the radius of a circle to add a certain number of 'bumps' to the circle - can be used to examine local and global form perception. Typically developing children and children with an ASD discriminated between circles and RF patterns that are processed either locally (RF24) or globally (RF3). Children with an ASD required greater shape deformation to identify RF3 shapes compared to typically developing children, consistent with difficulty in global processing in the ventral stream. No group difference was observed for RF24 shapes, suggesting intact local ventral-stream processing. These outcomes support the position that a deficit in global visual processing is present in ASDs, consistent with the notion of Weak Central Coherence.
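For readers unfamiliar with radial frequency patterns, the sketch below generates the contours of an RF3 and an RF24 shape directly from the definition given in the abstract: a circle whose radius is modulated sinusoidally around its circumference. The modulation amplitudes are arbitrary illustrative values, not the deformation thresholds measured in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

def radial_frequency_contour(rf, amplitude, radius=1.0, phase=0.0, n=1000):
    """Contour of a radial frequency (RF) pattern: a circle whose radius is
    modulated sinusoidally with 'rf' cycles around the circumference."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    r = radius * (1.0 + amplitude * np.sin(rf * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

# RF3 (few, broad bumps -> global shape cue) vs RF24 (many, fine bumps -> local cue)
for rf, amp in [(3, 0.05), (24, 0.05)]:
    x, y = radial_frequency_contour(rf, amp)
    plt.plot(x, y, label=f"RF{rf}")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```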
Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-09-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.
Visual arts training is linked to flexible attention to local and global levels of visual stimuli.
Chamberlain, Rebecca; Wagemans, Johan
2015-10-01
Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing whether flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain-specific and domain-general attentional enhancements induced by proficiency in the visual arts.
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Behavioral and Physiological Findings of Gender Differences in Global-Local Visual Processing
ERIC Educational Resources Information Center
Roalf, David; Lowery, Natasha; Turetsky, Bruce I.
2006-01-01
Hemispheric asymmetries in global-local visual processing are well-established, as are gender differences in cognition. Although hemispheric asymmetry presumably underlies gender differences in cognition, the literature on gender differences in global-local processing is sparse. We employed event related brain potential (ERP) recordings during…
Aiello, Marilena; Merola, Sheila; Lasaponara, Stefano; Pinto, Mario; Tomaiuolo, Francesco; Doricchi, Fabrizio
2018-01-31
The possibility of allocating attentional resources to the "global" shape or to the "local" details of pictorial stimuli helps visual processing. Investigations with hierarchical Navon letters, which are large "global" letters made up of small "local" ones, consistently demonstrate a right hemisphere advantage for global processing and a left hemisphere advantage for local processing. Here we investigated how the visual and phonological features of the global and local components of Navon letters influence these hemispheric advantages. In a first study in healthy participants, we contrasted the hemispheric processing of hierarchical letters in which global and local items competed for response selection with the processing of hierarchical letters in which the item presented at the unattended level (a letter, a false-letter conveying no phonological information, or a geometrical shape) did not compete for response selection. In a second study, we investigated the hemispheric processing of hierarchical stimuli in which global and local letters were both visually and phonologically congruent (e.g. a large uppercase G made of smaller uppercase Gs), visually incongruent but phonologically congruent (e.g. a large uppercase G made of small lowercase gs), or both visually and phonologically incongruent (e.g. a large uppercase G made of small lowercase or uppercase Ms). In a third study, we administered the same tasks to a right brain damaged patient with a lesion involving pre-striate areas engaged by global processing. The results of the first two experiments showed that the global abilities of the left hemisphere are limited by its strong susceptibility to interference from local letters, even when these are irrelevant to the task. Phonological features played a crucial role in this interference, which was fully maintained even when letters at the global and local levels were presented in different uppercase vs. lowercase formats. In contrast, when local features conveyed no phonological information, the left hemisphere showed preserved global processing abilities. These findings were supported by the study of the right brain damaged patient. These results offer a new look at hemispheric dominance in the attentional processing of the global and local levels of hierarchical stimuli.
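A hierarchical (Navon-style) letter of the kind used in these studies can be pictured as a coarse global glyph whose strokes are tiled with copies of a small local letter. The toy sketch below builds such a text-mode stimulus from illustrative 5x5 bitmaps; the glyph definitions are assumptions for demonstration only, not the actual stimuli used in the experiments.

```python
# Minimal sketch of a Navon-style hierarchical letter: a large "global" letter
# whose strokes are tiled from copies of a small "local" letter. The 5x5
# bitmaps below are illustrative only, not the stimuli used in the study.
GLYPHS = {
    "G": ["#####",
          "#....",
          "#..##",
          "#...#",
          "#####"],
    "M": ["#...#",
          "##.##",
          "#.#.#",
          "#...#",
          "#...#"],
}

def navon(global_letter, local_letter):
    rows = []
    for grid_row in GLYPHS[global_letter]:
        for sub_row in range(5):
            line = ""
            for cell in grid_row:
                if cell == "#":
                    line += GLYPHS[local_letter][sub_row].replace("#", local_letter).replace(".", " ")
                else:
                    line += " " * 5
            rows.append(line)
    return "\n".join(rows)

# Incongruent stimulus: a global G built from local M's
print(navon("G", "M"))
```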
Higher levels of depression are associated with reduced global bias in visual processing.
de Fockert, Jan W; Cooper, Andrew
2014-04-01
Negative moods have been associated with a tendency to prioritise local details in visual processing. The current study investigated the relation between depression and visual processing using the Navon task, a standard task of local and global processing. In the Navon task, global stimuli are presented that are made up of many local parts, and the participants are instructed to report the identity of either a global or a local target shape. Participants with a low self-reported level of depression showed evidence of the expected global processing bias, and were significantly faster at responding to the global, compared with the local level. By contrast, no such difference was observed in participants with high levels of depression. The reduction of the global bias associated with high levels of depression was only observed in the overall speed of responses to global (versus local) targets, and not in the level of interference produced by the global (versus local) distractors. These results are in line with recent findings of a dissociation between local/global processing bias and interference from local/global distractors, and support the claim that depression is associated with a reduction in the tendency to prioritise global-level processing.
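The global bias and interference measures referred to above are typically derived from condition-mean reaction times in the Navon task. The sketch below shows one common way such indices can be computed; the per-trial reaction times are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical per-trial reaction times (ms) from a Navon task, coded by the
# attended level and by whether global and local letters were congruent.
rt = {
    ("global", "congruent"):   np.array([480, 495, 470, 505]),
    ("global", "incongruent"): np.array([500, 515, 490, 520]),
    ("local",  "congruent"):   np.array([530, 545, 525, 550]),
    ("local",  "incongruent"): np.array([585, 600, 570, 610]),
}

mean = {k: v.mean() for k, v in rt.items()}

# Global precedence: how much faster global targets are reported than local ones
global_bias = (
    (mean[("local", "congruent")] + mean[("local", "incongruent")]) / 2
    - (mean[("global", "congruent")] + mean[("global", "incongruent")]) / 2
)

# Interference: cost of an incongruent unattended level, per attended level
global_interference = mean[("local", "incongruent")] - mean[("local", "congruent")]
local_interference = mean[("global", "incongruent")] - mean[("global", "congruent")]

print(f"global bias: {global_bias:.1f} ms")
print(f"interference from global distractors: {global_interference:.1f} ms")
print(f"interference from local distractors: {local_interference:.1f} ms")
```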
Electromagnetic Evidence of Altered Visual Processing in Autism
ERIC Educational Resources Information Center
Neumann, Nicola; Dubischar-Krivec, Anna M.; Poustka, Fritz; Birbaumer, Niels; Bolte, Sven; Braun, Christoph
2011-01-01
Individuals with autism spectrum disorder (ASD) demonstrate intact or superior local processing of visual-spatial tasks. We investigated the hypothesis that in a disembedding task, autistic individuals exhibit a more local processing style than controls, which is reflected by altered electromagnetic brain activity in response to embedded stimuli…
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions.
Bölte, S; Hubl, D; Dierks, T; Holtmann, M; Poustka, F
2008-01-01
Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. Findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle- and grating-selective neurons that contribute to shape representation, figure-ground, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.
ERIC Educational Resources Information Center
Van Eylen, Lien; Boets, Bart; Steyaert, Jean; Wagemans, Johan; Noens, Ilse
2018-01-01
Local and global visual processing abilities and processing style were investigated in individuals with autism spectrum disorder (ASD) versus typically developing individuals, children versus adolescents and boys versus girls. Individuals with ASD displayed more attention to detail in daily life, while laboratory tasks showed slightly reduced…
Global-local visual biases correspond with visual-spatial orientation.
Basso, Michael R; Lowery, Natasha
2004-02-01
Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which differed from the distractors (white circles) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., white ring with a squared hole). These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.
Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R
2001-11-01
The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple-layer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages that had a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms.
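As a rough illustration of this kind of simulation (not a reimplementation of the authors' model), the sketch below chains a few leaky integrate-and-fire layers through purely feedforward synapses with a 10 ms time constant and reports how first-spike latency accumulates across stages. All parameter values, including the drive strength and gain, are illustrative assumptions.

```python
import numpy as np

# Toy chain of leaky integrate-and-fire (LIF) layers driven purely feedforward
# through synapses with a 10 ms time constant; all parameter values are
# illustrative assumptions, not those of the published model.
dt, T = 0.1, 100.0                       # time step and duration (ms)
n_layers, n_cells = 4, 100
tau_m, tau_syn = 20.0, 10.0              # membrane and synaptic time constants (ms)
gain, v_thresh, v_reset = 30.0, 1.0, 0.0

rng = np.random.default_rng(0)
v = np.zeros((n_layers, n_cells))        # membrane potentials
syn = np.zeros((n_layers, n_cells))      # synaptic drive per layer
first_spike = np.full(n_layers, np.nan)  # first-spike latency per layer

for step in range(int(T / dt)):
    t = step * dt
    # External input to layer 0: switched on at t = 10 ms, with a little noise
    inp = (t >= 10.0) * (0.08 + 0.02 * rng.standard_normal(n_cells))
    for layer in range(n_layers):
        syn[layer] += dt * (-syn[layer] / tau_syn) + inp
        v[layer] += dt * (gain * syn[layer] - v[layer]) / tau_m
        fired = v[layer] >= v_thresh
        v[layer][fired] = v_reset
        if fired.any() and np.isnan(first_spike[layer]):
            first_spike[layer] = t
        # Feedforward drive to the next layer: scaled population spike count
        inp = np.full(n_cells, 0.2 * fired.mean())
    if not np.isnan(first_spike).any():
        break

print("first-spike latency per layer (ms):", np.round(first_spike, 1))
```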
Local and Global Auditory Processing: Behavioral and ERP Evidence
Sanders, Lisa D.; Poeppel, David
2007-01-01
Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency-modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference, both modulated by the amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global levels provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in distributions suggest preferential processing in more ventral and dorsal areas, respectively.
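The stimulus construction described above, short FM segments whose within-segment (local) and across-segment (global) pitch directions are set independently, can be sketched as follows. Only the 40 ms segment duration comes from the abstract; the carrier frequencies, glide sizes, and number of segments are illustrative assumptions.

```python
import numpy as np

def fm_segment(f_start, f_end, dur=0.04, fs=44100):
    """A 40 ms tone whose pitch glides linearly from f_start to f_end (local cue)."""
    t = np.arange(int(dur * fs)) / fs
    freq = f_start + (f_end - f_start) * t / dur
    phase = 2 * np.pi * np.cumsum(freq) / fs
    return np.sin(phase)

def local_global_pattern(local_up=True, global_up=True, n_seg=12,
                         f_lo=400.0, f_hi=1600.0):
    """Concatenate short FM segments so that pitch direction within each segment
    (local) and across segment onsets (global) can be set independently.
    Frequencies, glide sizes, and segment count are illustrative assumptions."""
    onsets = np.linspace(f_lo, f_hi, n_seg)
    if not global_up:
        onsets = onsets[::-1]
    glide = 1.25 if local_up else 0.8     # each segment glides up or down in pitch
    return np.concatenate([fm_segment(f0, f0 * glide) for f0 in onsets])

# e.g. a pattern that rises locally but falls globally (conflicting levels)
sound = local_global_pattern(local_up=True, global_up=False)
```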
Gestalt Perception and Local-Global Processing in High-Functioning Autism
ERIC Educational Resources Information Center
Bolte, Sven; Holtmann, Martin; Poustka, Fritz; Scheurich, Armin; Schmidt, Lutz
2007-01-01
This study examined gestalt perception in high-functioning autism (HFA) and its relation to tasks indicative of local visual processing. Data on gestalt perception, visual illusions (VI), hierarchical letters (HL), Block Design (BD) and the Embedded Figures Test (EFT) were collected in adult males with HFA, schizophrenia, depression and…
Truppa, Valentina; Carducci, Paola; De Simone, Diego Antonio; Bisazza, Angelo; De Lillo, Carlo
2017-03-01
In the last two decades, comparative research has addressed the issue of how the global and local levels of structure of visual stimuli are processed by different species, using Navon-type hierarchical figures, i.e. smaller local elements that form larger global configurations. Determining whether or not the various procedures adopted to test different species with hierarchical figures are equivalent is of crucial importance to ensure comparability of results. Among non-human species, global/local processing has been extensively studied in tufted capuchin monkeys using matching-to-sample tasks with hierarchical patterns. Local dominance has emerged consistently in these New World primates. In the present study, we assessed capuchins' processing of hierarchical stimuli with a method frequently adopted in studies of global/local processing in non-primate species: the conflict-choice task. Unlike the matching-to-sample procedure, this task involved processing local and global information retained in long-term memory. Capuchins were trained to discriminate between consistent hierarchical stimuli (similar global and local shape) and then tested with inconsistent hierarchical stimuli (different global and local shapes). We found that capuchins preferred the hierarchical stimuli featuring the correct local elements rather than those with the correct global configuration. This finding confirms that capuchins' local dominance, typically observed using matching-to-sample procedures, is also expressed as a local preference in the conflict-choice task. Our study adds to the growing body of comparative studies on visual grouping functions by demonstrating that the methods most frequently used in the literature on global/local processing produce analogous results irrespective of the extent to which memory processes are involved.
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
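The core idea of window-restricted processing can be conveyed with a toy tracker that samples only a small window around the last known target position in each frame and re-centres it on the local intensity peak. This is an illustration of the principle only, not the multi-window hardware described in the paper; the frame size, window size, and blob trajectory are arbitrary assumptions.

```python
import numpy as np

# Toy window-based tracker: process only a 20x20 local window (~2% of each
# frame) around the previous target position and re-centre it on the peak.
def make_frame(center, size=128, sigma=3.0):
    """Synthetic frame containing one bright Gaussian blob at (x, y) = center."""
    y, x = np.mgrid[:size, :size]
    return np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma ** 2))

def track(frames, start, half=10):
    pos = np.array(start, dtype=float)
    for frame in frames:
        x0, y0 = int(pos[0]), int(pos[1])
        window = frame[y0 - half:y0 + half, x0 - half:x0 + half]  # local window only
        dy, dx = np.unravel_index(np.argmax(window), window.shape)
        pos = np.array([x0 - half + dx, y0 - half + dy], dtype=float)
        yield tuple(pos)

# A blob moving diagonally; the tracker follows it from local windows alone.
true_path = [(30 + 2 * t, 40 + t) for t in range(20)]
frames = [make_frame(c) for c in true_path]
print(list(track(frames, start=(30, 40)))[-1])   # close to true_path[-1]
```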
Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents.
Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R
2007-03-01
Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
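Local frequency coding mechanisms of the kind mentioned in aim (1) are commonly modelled with Gabor filters. The sketch below is a generic illustration of such local frequency analyzers applied at one image location; it is an assumption of this write-up, not the project's actual image model, and the random patch stands in for a real image.

```python
import numpy as np

def gabor(size, wavelength, orientation, sigma, phase=0.0):
    """A local spatial-frequency analyzer: a sinusoidal carrier under a
    Gaussian envelope, as commonly used in models of low-level visual coding."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# Responses of a small bank of local frequency channels at one image location
rng = np.random.default_rng(1)
patch = rng.standard_normal((65, 65))            # stand-in for an image patch
for wavelength in (4, 8, 16, 32):                # fine to coarse scale (pixels/cycle)
    g = gabor(65, wavelength, orientation=0.0, sigma=wavelength)
    print(f"wavelength {wavelength:2d} px: response {np.sum(patch * g):+.2f}")
```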
A tale of two agnosias: distinctions between form and integrative agnosia.
Riddoch, M Jane; Humphreys, Glyn W; Akhtar, Nabeela; Allen, Harriet; Bracewell, R Martyn; Schofield, Andrew J
2008-02-01
The performance of two patients with visual agnosia was compared across a number of tests examining visual processing. The patients were distinguished by having dorsal and medial ventral extrastriate lesions. While inanimate objects were disadvantaged for the patient with a dorsal extrastriate lesion, animate items were disadvantaged for the patient with the medial ventral extrastriate lesion. The patients also showed contrasting patterns of performance on the Navon Test: the patient with a dorsal extrastriate lesion demonstrated a local bias, while the patient with a medial ventral extrastriate lesion had a global bias. We propose that the dorsal and medial ventral visual pathways may be characterized at an extrastriate level by differences in local relative to more global visual processing, and that this can be linked to visually based category-specific deficits in processing.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Romei, Vincenzo; Thut, Gregor; Mok, Robert M; Schyns, Philippe G; Driver, Jon
2012-03-01
Although oscillatory activity in the alpha band was traditionally associated with lack of alertness, more recent work has linked it to specific cognitive functions, including visual attention. The emerging method of rhythmic transcranial magnetic stimulation (TMS) allows causal interventional tests for the online impact on performance of TMS administered in short bursts at a particular frequency. TMS bursts at 10 Hz have recently been shown to have an impact on spatial visual attention, but any role in featural attention remains unclear. Here we used rhythmic TMS at 10 Hz to assess the impact on attending to global or local components of a hierarchical Navon-like stimulus (D. Navon (1977) Forest before trees: The precedence of global features in visual perception. Cognit. Psychol., 9, 353), in a paradigm recently used with TMS at other frequencies (V. Romei, J. Driver, P.G. Schyns & G. Thut. (2011) Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Curr. Biol., 2, 334-337). In separate groups, left or right posterior parietal sites were stimulated at 10 Hz just before presentation of the hierarchical stimulus. Participants had to identify either the local or global component in separate blocks. Right parietal 10 Hz stimulation (vs. sham) significantly impaired global processing without affecting local processing, while left parietal 10 Hz stimulation vs. sham impaired local processing with a minor trend to enhance global processing. These 10 Hz outcomes differed significantly from stimulation at other frequencies (i.e. 5 or 20 Hz) over the same site in other recent work with the same paradigm. These dissociations confirm differential roles of the two hemispheres in local vs. global processing, and reveal a frequency-specific role for stimulation in the alpha band for regulating feature-based visual attention.
ERIC Educational Resources Information Center
Maljaars, J. P. W.; Noens, I. L. J.; Scholte, E. M.; Verpoorten, R. A. W.; van Berckelaer-Onnes, I. A.
2011-01-01
Background: The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the…
Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.
Kanaya, Shoko; Yokosawa, Kazuhiko
2011-02-01
Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.
Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine
2016-05-11
The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to classic Navon letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information, and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details.
Parallel and serial grouping of image elements in visual perception.
Houtkamp, Roos; Roelfsema, Pieter R
2010-12-01
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.
Gestalten of today: early processing of visual contours and surfaces.
Kovács, I
1996-12-01
While much is known about the specialized, parallel processing streams of low-level vision that extract primary visual cues, there is only limited knowledge about the dynamic interactions between them. How are the fragments, caught by local analyzers, assembled together to provide us with a unified percept? How are local discontinuities in texture, motion or depth evaluated with respect to object boundaries and surface properties? These questions are presented within the framework of orientation-specific spatial interactions of early vision. Key observations of psychophysics, anatomy and neurophysiology on interactions of various spatial and temporal ranges are reviewed. Aspects of the functional architecture and possible neural substrates of local orientation-specific interactions are discussed, underlining their role in the integration of information across the visual field, and particularly in contour integration. Examples are provided demonstrating that global context, such as contour closure and figure-ground assignment, affects these local interactions. It is illustrated that figure-ground assignment is realized early in visual processing, and that the pattern of early interactions also brings about an effective and sparse coding of visual shape. Finally, it is concluded that the underlying functional architecture is not only dynamic and context dependent, but the pattern of connectivity depends as much on past experience as on actual stimulation.
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
2014-01-01
Experimentalists tend to classify models of visual perception as being either local or global, and as involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena.
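To make the filtering-style family of accounts concrete, here is a deliberately simplified caricature: a target flanked by distractors is low-pass filtered, and the flankers' blurred energy intrudes more on the target region the closer they are. This toy is an illustration only and does not correspond to any of the three published models analyzed; all stimulus and filter parameters are arbitrary assumptions.

```python
import numpy as np

# Toy caricature of a filtering account of crowding (illustration only): a 1-D
# "image" with a target bar flanked by two distractor bars is low-pass
# filtered, and the flankers' blurred energy intrudes on the target region.
def lowpass(signal, sigma_frac=0.01):
    """Attenuate high spatial frequencies with a Gaussian transfer function."""
    spectrum = np.fft.rfft(signal)
    f = np.arange(len(spectrum)) / len(signal)      # cycles per pixel
    spectrum = spectrum * np.exp(-0.5 * (f / sigma_frac) ** 2)
    return np.fft.irfft(spectrum, n=len(signal))

def scene(flanker_offset, size=1024, width=8.0):
    x = np.arange(size)
    bar = lambda centre: np.exp(-0.5 * ((x - centre) / width) ** 2)
    target = bar(size // 2)
    image = target + bar(size // 2 - flanker_offset) + bar(size // 2 + flanker_offset)
    return target, image

for offset in (200, 100, 50, 25):
    target, image = scene(offset)
    window = slice(512 - 32, 512 + 32)              # region around the target
    distortion = np.mean(np.abs(lowpass(image)[window] - lowpass(target)[window]))
    print(f"flanker offset {offset:3d} px -> distortion near target {distortion:.2e}")
```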
Cocchi, Luca; Sale, Martin V; L Gollo, Leonardo; Bell, Peter T; Nguyen, Vinh T; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B
2016-09-06
Within the primate visual system, areas at lower levels of the cortical hierarchy process basic visual features, whereas those at higher levels, such as the frontal eye fields (FEF), are thought to modulate sensory processes via feedback connections. Despite these functional exchanges during perception, there is little shared activity between early and late visual regions at rest. How interactions emerge between regions encompassing distinct levels of the visual hierarchy remains unknown. Here we combined neuroimaging, non-invasive cortical stimulation and computational modelling to characterize changes in functional interactions across widespread neural networks before and after local inhibition of primary visual cortex or FEF. We found that stimulation of early visual cortex selectively increased feedforward interactions with FEF and extrastriate visual areas, whereas identical stimulation of the FEF decreased feedback interactions with early visual areas. Computational modelling suggests that these opposing effects reflect a fast-slow timescale hierarchy from sensory to association areas.
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.
Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D
2012-07-01
We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI.
Attentional selection of relative SF mediates global versus local processing: evidence from EEG.
Flevaris, Anastasia V; Bentin, Shlomo; Robertson, Lynn C
2011-06-13
Previous research on functional hemispheric differences in visual processing has associated global perception with low spatial frequency (LSF) processing biases of the right hemisphere (RH) and local perception with high spatial frequency (HSF) processing biases of the left hemisphere (LH). The Double Filtering by Frequency (DFF) theory expanded this hypothesis by proposing that visual attention selects and is directed to relatively LSFs by the RH and relatively HSFs by the LH, suggesting a direct causal relationship between SF selection and global versus local perception. We tested this idea in the current experiment by comparing activity in the EEG recorded at posterior right and posterior left hemisphere sites while participants' attention was directed to global or local levels of processing after selection of relatively LSFs versus HSFs in a previous stimulus. Hemispheric asymmetry in the alpha band (8-12 Hz) during preparation for global versus local processing was modulated by the selected SF. In contrast, preparatory activity associated with selection of SF was not modulated by the previously attended level (global/local). These results support the DFF theory that top-down attentional selection of SF mediates global and local processing.
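The relative spatial-frequency split at the heart of the DFF account can be illustrated by decomposing an image into a low-pass (LSF) component, biased toward global structure, and the high-pass (HSF) residual, biased toward local detail. The random stimulus and the Gaussian cutoff below are arbitrary illustrative choices, not values from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Minimal sketch of an LSF/HSF decomposition of the kind invoked by the DFF
# account; the image and the cutoff (sigma) are illustrative assumptions.
rng = np.random.default_rng(0)
image = rng.random((256, 256))           # stand-in for a hierarchical stimulus

lsf = gaussian_filter(image, sigma=8.0)  # relatively low spatial frequencies
hsf = image - lsf                        # relatively high spatial frequencies

print("LSF energy:", np.var(lsf), " HSF energy:", np.var(hsf))
```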
Salient sounds activate human visual cortex automatically
McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.
2013-01-01
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Visual acuity in adults with Asperger's syndrome: no evidence for "eagle-eyed" vision.
Falkmer, Marita; Stuart, Geoffrey W; Danielsson, Henrik; Bram, Staffan; Lönebrink, Mikael; Falkmer, Torbjörn
2011-11-01
Autism spectrum conditions (ASC) are defined by criteria comprising impairments in social interaction and communication. Altered visual perception is one possible and often discussed cause of difficulties in social interaction and social communication. Recently, Ashwin et al. suggested that enhanced ability in local visual processing in ASC was due to superior visual acuity, but that study has been the subject of methodological criticism, placing the findings in doubt. The present study investigated visual acuity thresholds in 24 adults with Asperger's syndrome and compared their results with 25 control subjects with the 2 Meter 2000 Series Revised ETDRS Chart. The distribution of visual acuities within the two groups was highly similar, and none of the participants had superior visual acuity. Superior visual acuity in individuals with Asperger's syndrome could not be established, suggesting that differences in visual perception in ASC are not explained by this factor. A continued search for explanations of superior ability in local visual processing in persons with ASC is therefore warranted. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Scheurich, Armin; Fellgiebel, Andreas; Müller, Mattias J; Poustka, Fritz; Bölte, Sven
2010-03-01
The cognitive phenotype of autism spectrum disorders (ASD) is characterized, among other things, by local processing (weak central coherence). It was examined whether a test measuring the identification of fragmented pictures (FBT) can capture this preference for local processing. The FBT performance of 15 patients with ASD, 16 with depression, 16 with schizophrenia and 16 control subjects was compared. In addition, two tests well known to be sensitive to local processing were administered, namely the Embedded Figures Test (EFT) and the Block Design Test (BDT). ASD patients demonstrated a preference for local processing. Difficulties in global processing, or more specifically in gestalt perception (FBT), were accompanied by good performance on the EFT and BDT, as expected. Controlling for age and nonverbal intelligence (ANCOVA) reduced the differences to trends. However, calculating difference scores (i.e., subtracting FBT from EFT performance) yielded significant differences between the ASD and control groups even after controlling for age and intelligence. The FBT is a suitable exploratory test of local visual processing in ASD. In particular, a difference criterion (FBT vs. EFT) can be generated that discriminates between ASD and both clinical and healthy control groups.
Explaining seeing? Disentangling qualia from perceptual organization.
Ibáñez, Agustin; Bekinschtein, Tristan
2010-09-01
Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of a reentrant nature may explain several visual integration processes (feature binding or figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Based on the above statements, should the neural signatures of visual integration (via reentrant processes) be non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.
Global and local processing near the left and right hands
Langerak, Robin M.; La Mantia, Carina L.; Brown, Liana E.
2013-01-01
Visual targets can be processed more quickly and reliably when a hand is placed near the target. Both unimodal and bimodal representations of hands are largely lateralized to the contralateral hemisphere, and since each hemisphere demonstrates specialized cognitive processing, it is possible that targets appearing near the left hand may be processed differently than targets appearing near the right hand. The purpose of this study was to determine whether visual processing near the left and right hands interacts with hemispheric specialization. We presented hierarchical-letter stimuli (e.g., small characters used as local elements to compose large characters at the global level) near the left or right hands separately and instructed participants to discriminate the presence of target letters (X and O) from non-target letters (T and U) at either the global or local levels as quickly as possible. Targets appeared at either the global or local level of the display, at both levels, or were absent from the display; participants made foot-press responses. When discriminating target presence at the global level, participants responded more quickly to stimuli presented near the left hand than near either the right hand or in the no-hand condition. Hand presence did not influence target discrimination at the local level. Our interpretation is that left-hand presence may help participants discriminate global information, a right hemisphere (RH) process, and that the left hand may influence visual processing in a way that is distinct from the right hand. PMID:24194725
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS-based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
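As a rough illustration of the monocular approach described above, the sketch below estimates frame-to-frame ground displacement for a downward-pointing camera by tracking features with pyramidal Lucas-Kanade optical flow. The camera height, focal length, flat-ground scaling, and median-flow robustification are illustrative assumptions, not parameters from the report.

```python
# Minimal monocular visual-odometry sketch for a nadir-pointing camera
# (camera height, focal length, and the flat-ground assumption are illustrative).
import cv2
import numpy as np

def frame_displacement(prev_gray, curr_gray, height_m=0.5, focal_px=600.0):
    """Estimate ground-plane displacement (metres) between two grayscale frames."""
    # Detect trackable corner features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.zeros(2)
    # Track the features into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.flatten() == 1].reshape(-1, 2)
    good_new = nxt[status.flatten() == 1].reshape(-1, 2)
    if len(good_new) == 0:
        return np.zeros(2)
    # Median pixel flow is robust to outliers; height/focal length converts
    # pixels to metres under the nadir-view, flat-ground assumption.
    flow_px = np.median(good_new - good_old, axis=0)
    return flow_px * (height_m / focal_px)
```

Successive displacements would be accumulated into a position estimate and fused with the MEMS inertial measurements, for example in a complementary or Kalman filter, in line with the sensor-fusion use the abstract describes.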
Accelerating Demand Paging for Local and Remote Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David
2001-01-01
This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
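The general idea, overlapping computation with page reads and keeping several reads in flight, can be sketched in a few lines. The page size, worker count, file layout, and the assumption of a known page visit order below are illustrative and not details of the paper's algorithm.

```python
# Sketch of multi-threaded, application-controlled paging along a known visit order
# (page size, worker count, and file layout are assumptions for illustration).
from concurrent.futures import ThreadPoolExecutor

PAGE_BYTES = 1 << 20  # 1 MiB pages (assumption)

def read_page(path, page_id):
    """Blocking read of one page from local or remote storage."""
    with open(path, "rb") as f:
        f.seek(page_id * PAGE_BYTES)
        return f.read(PAGE_BYTES)

def visualize(path, page_order, process_page, workers=4):
    """Overlap page reads with computation and keep several reads in flight."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Issue the reads up front; the pool services several in parallel.
        futures = {pid: pool.submit(read_page, path, pid) for pid in page_order}
        for pid in page_order:
            data = futures[pid].result()   # waits only if the page is not ready yet
            process_page(pid, data)        # computation overlaps the remaining reads
```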
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
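The race model inequality analysis mentioned above compares the cumulative distribution of multisensory reaction times with the sum of the two unisensory distributions (Miller's bound); a minimal sketch of that test is given below, where the reaction-time arrays and the quantile grid are assumed inputs rather than the study's actual data.

```python
# Hedged sketch of a race-model-inequality check (Miller-style bound):
# positive values indicate bimodal responses faster than probability summation allows.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 19)):
    """Compare the audiovisual RT distribution with the race-model upper bound."""
    t = np.quantile(np.concatenate([rt_av, rt_a, rt_v]), quantiles)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model bound
    return ecdf(rt_av, t) - bound
```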
Reduced Distractibility in a Remote Culture
de Fockert, Jan W.; Caparos, Serge; Linnell, Karina J.; Davidoff, Jules
2011-01-01
Background In visual processing, there are marked cultural differences in the tendency to adopt either a global or local processing style. A remote culture (the Himba) has recently been reported to have a greater local bias in visual processing than Westerners. Here we give the first evidence that a greater, and remarkable, attentional selectivity provides the basis for this local bias. Methodology/Principal Findings In Experiment 1, Eriksen-type flanker interference was measured in the Himba and in Western controls. In both groups, responses to the direction of a task-relevant target arrow were affected by the compatibility of task-irrelevant distractor arrows. However, the Himba showed a marked reduction in overall flanker interference compared to Westerners. The smaller interference effect in the Himba occurred despite their overall slower performance than Westerners, and was evident even at a low level of perceptual load of the displays. In Experiment 2, the attentional selectivity of the Himba was further demonstrated by showing that their attention was not even captured by a moving singleton distractor. Conclusions/Significance We argue that the reduced distractibility in the Himba is clearly consistent with their tendency to prioritize the analysis of local details in visual processing. PMID:22046275
The forest, the trees, and the leaves: Differences of processing across development.
Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier
2016-08-01
To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed the global figure) that was presented to 112 participants aged 5, 6, 9, and 21 years old. This task allowed us to explore (a) which level is perceptually the most salient at each age (i.e., the fastest detected level) and (b) what kind of attentional processing occurs for each level across development (i.e., efficient processing: detection time does not increase with the number of stimuli on the display; less efficient processing: detection time increases linearly with the growing number of distractors). Results showed that the global level was the most salient at 5 years of age, whereas the global and intermediate levels were both salient for 9-year-olds and adults. Interestingly, at 6 years of age, the intermediate level was the most salient level. Second, all participants showed an efficient processing of both intermediate and global levels of hierarchical stimuli, and a less efficient processing of the local level, suggesting a local disadvantage rather than a global advantage in visual search. The cognitive cost for selecting the local target was higher for 5- and 6-year-old children compared to 9-year-old children and adults. These results are discussed with regard to the development of executive control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Recurrent V1-V2 interaction in early visual boundary processing.
Neumann, H; Sepp, W
1999-11-01
A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
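A toy version of the proposed feedforward/feedback loop can make the gain-control idea concrete: initial contrast measurements are normalized, matched against a local contour-shape template at the higher stage, and the match is fed back to multiplicatively enhance consistent activity. The one-dimensional simplification, the template, and the constants below are assumptions for illustration, not the authors' published equations.

```python
# Illustrative single cycle of recurrent feedforward/feedback contour gain control
# (template and constants are assumptions, not the model's exact parameters).
import numpy as np

def recurrent_cycle(v1_activity, contour_template, alpha=2.0, eps=1e-6):
    """v1_activity: 1-D contrast responses along an edge; returns enhanced responses."""
    # Feedforward: divisively normalize the initial V1 measurements.
    ff = v1_activity / (eps + v1_activity.sum())
    # Higher stage: match the input arrangement against a local contour template.
    match = np.convolve(ff, contour_template, mode="same")
    # Feedback: multiplicatively enhance V1 activity consistent with the match,
    # so isolated noisy responses gain little while collinear fragments are boosted.
    return v1_activity * (1.0 + alpha * np.clip(match, 0, None))

# Example: a fragmented contour with gaps is enhanced relative to isolated noise.
v1 = np.array([0.1, 0.9, 0.0, 0.8, 0.9, 0.1, 0.05])
print(recurrent_cycle(v1, contour_template=np.ones(3)))
```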
ERIC Educational Resources Information Center
Sadler-Smith, Eugene
2011-01-01
The study explored various facets of the intuitive style and its relevance to learning and education from a dual-processing perspective, namely how it relates to other style constructs (analytical; visual and verbal; local and global), gender, and superstitious reasoning and how these are likely to impact upon learning in educational and…
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2013-01-01
Relative to low scorers, high scorers on the Autism-Spectrum Quotient (AQ) show enhanced performance on the Embedded Figures Test and the Radial Frequency search task (RFST), which has been attributed to both enhanced local processing and differences in combining global percepts. We investigate the role of local and global processing further using…
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this remains difficult for computers. Object classification is a challenging problem that has received extensive interest and has broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve the classification problem. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, inspired by the complete process by which humans classify different kinds of objects, this paper proposes a new classification method that combines a visual attention model with a CNN. First, the visual attention model simulates the human visual selection mechanism. Second, the CNN simulates how humans select features and extracts local features from the selected areas. Finally, the classification method relies not only on these local features but also adds human semantic features to classify objects. This approach has apparent advantages in biological plausibility. Experimental results demonstrated that the method significantly improves classification performance.
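A minimal sketch of the attention-then-classification pipeline is given below: a crude saliency map selects the most informative region, and the resulting crop would then be handed to a CNN feature extractor. The intensity-contrast saliency measure, the crop size, and the classifier hook are stand-ins for the learning-based saliency model and network used in the paper.

```python
# Sketch of the saliency-then-CNN pipeline (the simple intensity-contrast saliency
# and the crop size are assumptions standing in for the learned saliency model).
import numpy as np

def saliency_map(gray):
    """Crude center-surround saliency: local deviation from the global mean intensity."""
    return np.abs(gray - gray.mean())

def most_salient_crop(gray, size=64):
    """Return the size x size patch centred on the saliency peak."""
    sal = saliency_map(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    half = size // 2
    y0 = np.clip(y - half, 0, gray.shape[0] - size)
    x0 = np.clip(x - half, 0, gray.shape[1] - size)
    return gray[y0:y0 + size, x0:x0 + size]

# The crop would then be passed to a CNN classifier, e.g.:
# logits = cnn(preprocess(most_salient_crop(image)))
```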
Two subdivisions of macaque LIP process visual-oculomotor information differently.
Chen, Mo; Li, Bing; Guang, Jing; Wei, Linyu; Wu, Si; Liu, Yu; Zhang, Mingsha
2016-10-11
Although the cerebral cortex is thought to be composed of functionally distinct areas, the actual parcellation of areas and assignment of functions are still highly controversial. An example is the much-studied lateral intraparietal cortex (LIP). Despite the general agreement that LIP plays an important role in visual-oculomotor transformation, it remains unclear whether the area is primarily sensory- or motor-related (the attention-intention debate). Although LIP has been considered a functionally unitary area, its dorsal (LIPd) and ventral (LIPv) parts differ in local morphology and long-distance connectivity. In particular, LIPv has much stronger connections with two oculomotor centers, the frontal eye field and the deep layers of the superior colliculus, than does LIPd. Such anatomical distinctions imply that, compared with LIPd, LIPv might be more involved in oculomotor processing. We tested this hypothesis physiologically with a memory saccade task and a gap saccade task. We found that LIP neurons with persistent memory activity in the memory saccade task are primarily driven either by visual stimulation (vision-related) or by both visual and saccadic events (vision-saccade-related) in the gap saccade task. The distribution changes from predominantly vision-related to predominantly vision-saccade-related as the recording depth increases along the dorsal-ventral dimension. Consistently, the simultaneously recorded local field potential also changes from visually evoked to saccade evoked. Finally, local injection of muscimol (a GABA agonist) in LIPv, but not in LIPd, dramatically decreases the proportion of express saccades. With these results, we conclude that LIPd and LIPv are more involved in visual and visual-saccadic processing, respectively.
Zmigrod, Sharon; Zmigrod, Leor; Hommel, Bernhard
2015-01-01
While recent studies have investigated how processes underlying human creativity are affected by particular visual-attentional states, we tested the impact of more stable attention-related preferences. These were assessed by means of Navon's global-local task, in which participants respond to the global or local features of large letters constructed from smaller letters. Three standard measures were derived from this task: the sizes of the global precedence effect, the global interference effect (i.e., the impact of incongruent letters at the global level on local processing), and the local interference effect (i.e., the impact of incongruent letters at the local level on global processing). These measures were correlated with performance in a convergent-thinking creativity task (the Remote Associates Task), a divergent-thinking creativity task (the Alternate Uses Task), and a measure of fluid intelligence (Raven's matrices). Flexibility in divergent thinking was predicted by the local interference effect while convergent thinking was predicted by intelligence only. We conclude that a stronger attentional bias to visual information about the "bigger picture" promotes cognitive flexibility in searching for multiple solutions.
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignorance/suppression are opposite effects of attention and appear to be mutually exclusive. Yet no unified account of these factors has been provided, despite their importance for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Combining local and global limitations of visual search.
Põder, Endel
2017-04-01
There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments, and density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, dependent on the features involved. While color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.
Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C
2015-08-19
The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
Bölte, Sven; Poustka, Fritz
2006-06-01
The objective of this study was to investigate the tendency for local processing style ('weak central coherence') and executive dysfunction in parents of subjects with an autism spectrum disorder (ASD) compared with parents of individuals with early onset schizophrenia (EOS) and mental retardation (MR). Sixty-two parents of subjects with ASD, 36 parents of subjects with EOS and 30 parents of subjects with MR were examined. Data on two scales indicative of local visual processing (Embedded Figures Test, Block Design) and on three executive function tests (Wisconsin Card Sorting Test, Tower of Hanoi, Trailmaking Test) were collected for all participants. Parents of subjects with ASD performed significantly faster on the Embedded Figures Test compared with both control samples. No other substantial group differences were observed. The findings indicate that an increased tendency for local processing in terms of visual disembedding could be a relatively specific core feature of the broader cognitive phenotype of autism in parents.
Jonkman, L M; Kenemans, J L; Kemner, C; Verbaten, M N; van Engeland, H
2004-07-01
This study was aimed at investigating whether attention-deficit hyperactivity disorder (ADHD) children suffer from specific early selective attention deficits in the visual modality with the aid of event-related brain potentials (ERPs). Furthermore, brain source localization was applied to identify brain areas underlying possible deficits in selective visual processing in ADHD children. A two-channel visual color selection task was administered to 18 ADHD and 18 control subjects in the age range of 7-13 years and ERP activity was derived from 30 electrodes. ADHD children exhibited lower perceptual sensitivity scores resulting in poorer target selection. The ERP data suggested an early selective-attention deficit as manifested in smaller frontal positive activity (frontal selection positivity; FSP) in ADHD children around 200 ms whereas later occipital and fronto-central negative activity (OSN and N2b; 200-400 ms latency) appeared to be unaffected. Source localization explained the FSP by posterior-medial equivalent dipoles in control subjects, which may reflect the contribution of numerous surrounding areas. ADHD children have problems with selective visual processing that might be caused by a specific early filtering deficit (absent FSP) occurring around 200 ms. The neural sources underlying these problems have to be further identified. Source localization also suggested abnormalities in the 200-400 ms time range, pertaining to the distribution of attention-modulated activity in lateral frontal areas.
Wijnen, V J M; Eilander, H J; de Gelder, B; van Boxtel, G J M
2014-11-01
Auditory stimulation is often used to evoke responses in unresponsive patients who have suffered severe brain injury. In order to investigate visual responses, we examined visual evoked potentials (VEPs) and behavioral responses to visual stimuli in vegetative patients during recovery to consciousness. Behavioral responses to visual stimuli (visual localization, comprehension of written commands, and object manipulation) and flash VEPs were repeatedly examined in eleven vegetative patients every two weeks for an average period of 2.6 months, and patients' VEPs were compared to a healthy control group. Long-term outcome of the patients was assessed 2-3 years later. Visual response scores increased during recovery to consciousness for all scales: visual localization, comprehension of written commands, and object manipulation. VEP amplitudes were smaller, and latencies were longer, in the patient group relative to the controls. VEP characteristics at first measurement were related to long-term outcome up to three years after injury. Our findings show the improvement of visual responding with recovery from the vegetative state to consciousness. Elementary visual processing is present, but according to the VEP responses it is poorer in the vegetative and minimally conscious states than in healthy controls, and remains poorer after patients have recovered to consciousness. However, initial VEPs are related to long-term outcome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Liu, Han-Hsuan; Cline, Hollis T
2016-07-06
Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorated in the absence of FMRP. These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity in vivo. Copyright © 2016 the authors 0270-6474/16/367325-15$15.00/0.
ERIC Educational Resources Information Center
Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.
2012-01-01
We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a group of typically developing young adults matched on chronological age and IQ (n = 12). In one task, participants attended to one level of the…
The Development of Global and Local Processing: A Comparison of Children to Adults
ERIC Educational Resources Information Center
Peterson, Eric; Peterson, Robin L.
2014-01-01
In light of the adult model of a hemispheric asymmetry of global and local processing, we compared children (M [subscript age] = 8.4 years) to adults in a global-local reaction time (RT) paradigm. Hierarchical designs (large shapes made of small shapes) were presented randomly to each visual field, and participants were instructed to identify…
ERIC Educational Resources Information Center
Guy, Maggie W.; Reynolds, Greg D.; Zhang, Dantong
2013-01-01
Event-related potentials (ERPs) were utilized in an investigation of 21 six-month-olds' attention to and processing of global and local properties of hierarchical patterns. Overall, infants demonstrated an advantage for processing the overall configuration (i.e., global properties) of local features of hierarchical patterns; however,…
Wolford, E; Pesonen, A-K; Heinonen, K; Lahti, M; Pyhälä, R; Lahti, J; Hovi, P; Strang-Karlsson, S; Eriksson, J G; Andersson, S; Järvenpää, A-L; Kajantie, E; Räikkönen, K
2017-04-01
Visual processing problems may be one underlying factor for cognitive impairments related to autism spectrum disorders (ASDs). We examined associations between ASD-traits (Autism-Spectrum Quotient) and visual processing performance (Rey-Osterrieth Complex Figure Test; Block Design task of the Wechsler Adult Intelligence Scale-III) in young adults (mean age=25.0, s.d.=2.1 years) born preterm at very low birth weight (VLBW; <1500 g) (n=101) or at term (n=104). A higher level of ASD-traits was associated with slower global visual processing speed among the preterm VLBW, but not among the term-born group (P<0.04 for interaction). Our findings suggest that the associations between ASD-traits and visual processing may be restricted to individuals born preterm, and related specifically to global, not local visual processing. Our findings point to cumulative social and neurocognitive problems in those born preterm at VLBW.
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.
Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal
2016-01-01
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence the quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) in response to the cue and the target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze-cue stimuli.
ERIC Educational Resources Information Center
Martens, Ulla; Hubner, Ronald
2013-01-01
While hemispheric differences in global/local processing have been reported by various studies, it is still under dispute at which processing stage they occur. Primarily, it was assumed that these asymmetries originate from an early perceptual stage. Instead, the content-level binding theory (Hubner & Volberg, 2005) suggests that the hemispheres…
Hiding the Disk and Network Latency of Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David
2001-01-01
This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The paper includes measurements that show that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.
Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe
2015-07-01
Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
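The rendering principle, mapping a detected object's image position onto a handful of phosphenes, can be sketched as follows. The 3 x 3 grid (nine electrodes), the frame size, and the upstream object detector are assumptions standing in for the actual device simulation.

```python
# Hedged sketch of object-localization-based phosphene rendering: the cell of a
# coarse 3 x 3 "electrode" grid nearest the detected object is lit. The object
# detector itself is assumed to be supplied by a separate vision pipeline.
import numpy as np

def render_phosphenes(obj_xy, frame_shape, grid=(3, 3)):
    """Return a grid of phosphene intensities with the cell nearest the object lit."""
    rows, cols = grid
    h, w = frame_shape
    phosphenes = np.zeros(grid)
    r = min(int(obj_xy[1] / h * rows), rows - 1)
    c = min(int(obj_xy[0] / w * cols), cols - 1)
    phosphenes[r, c] = 1.0
    return phosphenes

# Example: an object detected at pixel (x=500, y=120) in a 480 x 640 frame.
print(render_phosphenes((500, 120), frame_shape=(480, 640)))
```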
Krakowski, Claire-Sara; Borst, Grégoire; Vidal, Julie; Houdé, Olivier; Poirel, Nicolas
2018-09-01
Visual environments are composed of global shapes and local details that compete for attentional resources. In adults, the global level is processed more rapidly than the local level, and global information must be inhibited in order to process local information when the local information and global information are in conflict. Compared with adults, children present less of a bias toward global visual information and appear to be more sensitive to the density of local elements that constitute the global level. The current study aimed, for the first time, to investigate the key role of inhibition during global/local processing in children. By including two different conditions of global saliency during a negative priming procedure, the results showed that when the global level was salient (dense hierarchical figures), 7-year-old children and adults needed to inhibit the global level to process the local information. However, when the global level was less salient (sparse hierarchical figures), only children needed to inhibit the local level to process the global information. These results confirm a weaker global bias and the greater impact of saliency in children than in adults. Moreover, the results indicate that, regardless of age, inhibition of the most salient hierarchical level is systematically required to select the less salient but more relevant level. These findings have important implications for future research in this area. Copyright © 2018 Elsevier Inc. All rights reserved.
A systematic review of visual processing and associated treatments in body dysmorphic disorder.
Beilharz, F; Castle, D J; Grace, S; Rossell, S L
2017-07-01
Recent research on body dysmorphic disorder (BDD) has explored abnormal visual processing, yet it is unclear how this relates to treatment. The aim of this study was to summarize our current understanding of visual processing in BDD and to review associated treatments. The literature was collected through PsycInfo and PubMed. Visual processing articles were included if written in English after 1970, had a specific BDD group compared to healthy controls, and were not case studies. Due to the lack of research regarding treatments associated with visual processing, case studies were included for the treatment review. A number of visual processing abnormalities are present in BDD, including face recognition, emotion identification, aesthetics, object recognition and gestalt processing. Differences from healthy controls include a dominance of detailed local processing over global processing and associated changes in brain activation in visual regions. Perceptual mirror retraining and some forms of self-exposure have demonstrated improved treatment outcomes, but have not been examined in isolation from broader treatments. Despite these abnormalities in perception, particularly concerning face and emotion recognition, few BDD treatments attempt to specifically remediate this. The development of a novel visual training programme which addresses these widespread abnormalities may provide an effective treatment modality. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Organization of the Drosophila larval visual circuit
Gendre, Nanae; Neagu-Maier, G Larisa; Fetter, Richard D; Schneider-Mizell, Casey M; Truman, James W; Zlatic, Marta; Cardona, Albert
2017-01-01
Visual systems transduce, process and transmit light-dependent environmental cues. Computation of visual features depends on photoreceptor neuron types (PR) present, organization of the eye and wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR-subtypes create two converging pathways potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.
Model-based analysis of pattern motion processing in mouse primary visual cortex
Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.
2015-01-01
Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID:26300738
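In spirit, the model-based classification amounts to scoring each cell's single-trial responses under competing tuning predictions and keeping the best-supported model. The Gaussian-noise simplification below is a hedged sketch of that idea, not the full Bayesian framework with its model ranking and rejection criteria.

```python
# Hedged sketch of single-trial model comparison for plaid responses: the model
# ("component" vs. "pattern" tuning prediction) with the higher total
# log-likelihood wins, assuming Gaussian trial-to-trial noise.
import numpy as np
from scipy.stats import norm

def classify_cell(trial_responses, component_pred, pattern_pred, sigma=1.0):
    """trial_responses: (n_trials, n_directions); *_pred: (n_directions,) predictions."""
    ll_comp = norm.logpdf(trial_responses, loc=component_pred, scale=sigma).sum()
    ll_patt = norm.logpdf(trial_responses, loc=pattern_pred, scale=sigma).sum()
    return ("component", ll_comp) if ll_comp > ll_patt else ("pattern", ll_patt)
```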
Cross-modal orienting of visual attention.
Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J
2016-03-01
This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Real-Time Visualization of an HPF-based CFD Simulation
NASA Technical Reports Server (NTRS)
Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.
Brederoo, Sanne G; Nieuwenstein, Mark R; Lorist, Monicque M; Cornelissen, Frans W
2017-12-01
It is often assumed that the human brain processes the global and local properties of visual stimuli in a lateralized fashion, with a left hemisphere (LH) specialization for local detail and a right hemisphere (RH) specialization for global form. However, the evidence for such global-local lateralization stems predominantly from studies using linguistic stimuli, the processing of which has itself been shown to be LH lateralized. In addition, some studies have reported a reversal of global-local lateralization when using non-linguistic stimuli. Accordingly, it remains unclear whether global-local lateralization may in fact be stimulus-specific. To address this issue, we asked participants to respond to linguistic and non-linguistic stimuli that were presented in the right and left visual fields, allowing for first access by the LH and RH, respectively. The results showed global-RH and local-LH advantages for both stimulus types, but the global lateralization effect was larger for linguistic stimuli. Furthermore, this pattern of results was found to be robust, as it was observed regardless of two other task manipulations. We conclude that the instantiation and direction of global and local lateralization are not stimulus-specific. However, the magnitude of global, but not local, lateralization depends on stimulus type. Copyright © 2017 Elsevier Inc. All rights reserved.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., the effect of temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
The Forest, the Trees, and the Leaves: Differences of Processing across Development
ERIC Educational Resources Information Center
Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier
2016-01-01
To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed…
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contribution of a cognitive factor, namely the audio-visual congruency of speech, a factor that has often been underestimated in previous ventriloquism research. Specifically, we investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. When bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. The results also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. This suggests that greater flexibility in responding to multi-sensory environments exists than has previously been considered.
Prefrontal Cortex Is Critical for Contextual Processing: Evidence from Brain Lesions
ERIC Educational Resources Information Center
Fogelson, Noa; Shah, Mona; Scabini, Donatella; Knight, Robert T.
2009-01-01
We investigated the role of prefrontal cortex (PFC) in local contextual processing using a combined event-related potentials and lesion approach. Local context was defined as the occurrence of a short predictive series of visual stimuli occurring before delivery of a target event. Targets were preceded by either randomized sequences of standards…
Epicenters of dynamic connectivity in the adaptation of the ventral visual system.
Prčkovska, Vesna; Huijbers, Willem; Schultz, Aaron; Ortiz-Teran, Laura; Peña-Gomez, Cleofe; Villoslada, Pablo; Johnson, Keith; Sperling, Reisa; Sepulcre, Jorge
2017-04-01
Neuronal responses adapt to familiar and repeated sensory stimuli. Enhanced synchrony across wide brain systems has been postulated as a potential mechanism for this adaptation phenomenon. Here, we used recently developed graph theory methods to investigate hidden connectivity features of dynamic synchrony changes during a visual repetition paradigm. Particularly, we focused on strength connectivity changes occurring at local and distant brain neighborhoods. We found that connectivity reorganization in visual modal cortex-such as local suppressed connectivity in primary visual areas and distant suppressed connectivity in fusiform areas-is accompanied by enhanced local and distant connectivity in higher cognitive processing areas in multimodal and association cortex. Moreover, we found a shift of the dynamic functional connections from primary-visual-fusiform to primary-multimodal/association cortex. These findings suggest that repetition-suppression is made possible by reorganization of functional connectivity that enables communication between low- and high-order areas. Hum Brain Mapp 38:1965-1976, 2017. © 2017 Wiley Periodicals, Inc.
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are (1) understanding device operation, (2) image optimization, (3) image interpretation, and (4) visualization of needle insertion and injection of the local anaesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes the advantages and disadvantages of each. Current deficits pertain to the validation process.
Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn
2016-01-01
Local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions, but also in a superior ability to disembed figures; however, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison with 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual functioning hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.
Acquiring skill at medical image inspection: learning localized in early visual processes
NASA Astrophysics Data System (ADS)
Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.
1997-04-01
Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher-level 'conceptual' learning. Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway, cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite-contrast images. In both experiments learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.
A theta rhythm in macaque visual cortex and its attentional modulation
Spyropoulos, Georgios; Fries, Pascal
2018-01-01
Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632
Hemispheric asymmetry of liking for representational and abstract paintings.
Nadal, Marcos; Schiavi, Susanna; Cattaneo, Zaira
2017-10-13
Although the neural correlates of the appreciation of aesthetic qualities have been the target of much research in the past decade, few experiments have explored the hemispheric asymmetries in underlying processes. In this study, we used a divided visual field paradigm to test for hemispheric asymmetries in men's and women's preference for abstract and representational artworks. Both male and female participants liked representational paintings more when presented in the right visual field, whereas preference for abstract paintings was unaffected by presentation hemifield. We hypothesize that this result reflects a facilitation of the sort of visual processes relevant to laypeople's liking for art (specifically, local processing of highly informative object features) when artworks are presented in the right visual field, given the left hemisphere's advantage in processing such features.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
A web-based solution for 3D medical image visualization
NASA Astrophysics Data System (ADS)
Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo
2015-03-01
In this presentation, we describe a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. This web-based design makes it feasible for users to access the 3D medical image visualization service wherever an internet connection is available.
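As a rough illustration of the server side of such a solution, the hypothetical endpoint below serves individual volume slices as PNG images to a browser client; it is a minimal sketch only and omits the GPU acceleration and PACS integration described in the abstract.

```python
# Hypothetical slice-serving endpoint (sketch; not the authors' implementation).
import io
import numpy as np
from flask import Flask, send_file
from PIL import Image

app = Flask(__name__)
VOLUME = np.random.randint(0, 255, (128, 256, 256), dtype=np.uint8)  # stand-in for a CT/MR volume

@app.route("/slice/<int:z>")
def get_slice(z):
    z = max(0, min(z, VOLUME.shape[0] - 1))      # clamp the requested slice index
    buf = io.BytesIO()
    Image.fromarray(VOLUME[z]).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")  # the HTML5 client displays the returned image

if __name__ == "__main__":
    app.run()
```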
Global processing in amblyopia: a review
Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin
2014-01-01
Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional theoretical analysis is presented, covering the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods, such as a shifted power transformation for improving reliability, a coefficient-truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction, are proposed to further improve visual recognition performance. Extensive experiments are conducted to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are used to assess the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
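The abstract does not spell out the objective function, but the core idea of classifying by locally linear reconstruction from nearest neighbours can be sketched as below; the plain least-squares fit is a simplification that omits the sparsity and locality regularizers described in the paper.

```python
# Simplified locally-linear-KNN-style classifier (sketch under stated assumptions).
import numpy as np

def llk_classify(x, X_train, y_train, k=10):
    """Reconstruct x from its k nearest training samples of each class by least
    squares and assign the class with the smallest reconstruction residual."""
    labels = np.unique(y_train)
    residuals = []
    for c in labels:
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)          # distances to class-c samples
        nn = Xc[np.argsort(d)[:k]]                  # k nearest neighbours, shape (k, dim)
        coef, *_ = np.linalg.lstsq(nn.T, x, rcond=None)
        residuals.append(np.linalg.norm(x - nn.T @ coef))
    return labels[int(np.argmin(residuals))]

# Toy usage with two Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
print(llk_classify(rng.normal(3, 1, 5), X, y, k=5))
```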
A studyforrest extension, retinotopic mapping and localization of higher visual areas
Sengupta, Ayan; Kaule, Falko R.; Guntupalli, J. Swaroop; Hoffmann, Michael B.; Häusler, Christian; Stadler, Jörg; Hanke, Michael
2016-01-01
The studyforrest (http://studyforrest.org) dataset is likely the largest neuroimaging dataset on natural language and story processing publicly available today. In this article, along with a companion publication, we present an update of this dataset that extends its scope to vision and multi-sensory research. 15 participants of the original cohort volunteered for a series of additional studies: a clinical examination of visual function, a standard retinotopic mapping procedure, and a localization of higher visual areas—such as the fusiform face area. The combination of this update, the previous data releases for the dataset, and the companion publication, which includes neuroimaging and eye tracking data from natural stimulation with a motion picture, form an extremely versatile and comprehensive resource for brain imaging research—with almost six hours of functional neuroimaging data across five different stimulation paradigms for each participant. Furthermore, we describe employed paradigms and present results that document the quality of the data for the purpose of characterising major properties of participants’ visual processing stream. PMID:27779618
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and entorhinal responses, entorhinal responses are larger to repeated words during memory retrieval. These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated to repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
Developmental Changes in the Processing of Hierarchical Shapes Continue into Adolescence.
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Geldart, Sybil; Maurer, Daphne; de Schonen, Scania
2003-01-01
Three experiments obtained same-different judgments from children and adults to trace normal development of local and global processing of hierarchical visual forms. Findings indicated that reaction time was faster on global trials than local trials; bias was stronger in children and diminished to adult levels between ages 10 and 14. Reaction time…
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has no effective solution within either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed for the solution of computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for processing real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of the visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorization, search, and recognition.
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background: The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results: Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions: Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983
Running VisIt Software on the Peregrine System
VisIt features a robust remote visualization capability: VisIt can be started on a local machine and used to visualize data on a remote compute cluster. To enable remote visualization, the VisIt module must be loaded ('module load') as part of this process.
Mental visualization of objects from cross-sectional images
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2011-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Orientation selectivity sharpens motion detection in Drosophila
Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.
2015-01-01
Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
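For reference, the maximum-likelihood integration model against which the bimodal data are compared has the following standard form (generic notation, not taken from the paper): the bimodal estimate is a precision-weighted average of the unimodal estimates, and its variance is lower than either unimodal variance.

```latex
\hat{S}_{AV} = w_V \hat{S}_V + w_A \hat{S}_A, \qquad
w_V = \frac{\sigma_A^{2}}{\sigma_A^{2} + \sigma_V^{2}}, \qquad
w_A = \frac{\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}, \qquad
\sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
```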
Impairment in local and global processing and set-shifting in body dysmorphic disorder
Kerwin, Lauren; Hovav, Sarit; Helleman, Gerhard; Feusner, Jamie D.
2014-01-01
Body dysmorphic disorder (BDD) is characterized by distressing and often debilitating preoccupations with misperceived defects in appearance. Research suggests that aberrant visual processing may contribute to these misperceptions. This study used two tasks to probe global and local visual processing as well as set shifting in individuals with BDD. Eighteen unmedicated individuals with BDD and 17 non-clinical controls completed two global-local tasks. The embedded figures task requires participants to determine which of three complex figures contained a simpler figure embedded within it. The Navon task utilizes incongruent stimuli comprised of a large letter (global level) made up of smaller letters (local level). The outcome measures were response time and accuracy rate. On the embedded figures task, BDD individuals were slower and less accurate than controls. On the Navon task, BDD individuals processed both global and local stimuli slower and less accurately than controls, and there was a further decrement in performance when shifting attention between the different levels of stimuli. Worse insight correlated with poorer performance on both tasks. Taken together, these results suggest abnormal global and local processing for non-appearance related stimuli among BDD individuals, in addition to evidence of poor set-shifting abilities. Moreover, these abnormalities appear to relate to the important clinical variable of poor insight. Further research is needed to explore these abnormalities and elucidate their possible role in the development and/or persistence of BDD symptoms. PMID:24972487
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
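A hedged sketch of the region-based self-similarity idea follows: each subregion is scored by its distance to the most similar anomaly-free patch in a dictionary. The `extract_features` stub is a hypothetical stand-in for the CNN feature extractor used in the paper.

```python
# Region-based self-similarity anomaly scoring (sketch; CNN features replaced by a stub).
import numpy as np

def extract_features(patch):
    # Hypothetical placeholder for a CNN feature extractor; here we simply flatten.
    return patch.ravel().astype(np.float64)

def anomaly_map(image, normal_patches, patch=32, stride=16):
    """Score each subregion by its distance to the closest anomaly-free patch."""
    dictionary = np.stack([extract_features(p) for p in normal_patches])
    h, w = image.shape
    scores = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            sub = image[i * stride:i * stride + patch, j * stride:j * stride + patch]
            f = extract_features(sub)
            # degree of abnormality = distance to the most similar normal patch
            scores[i, j] = np.min(np.linalg.norm(dictionary - f, axis=1))
    return scores
```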
Xi-cam: Flexible High Throughput Data Processing for GISAXS
NASA Astrophysics Data System (ADS)
Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander
With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel CPU and GPU processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.
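As a loose illustration of the graph-based workflow idea (not Xi-cam's actual plugin API), a processing chain can be expressed as a small dependency graph whose nodes are executed recursively; the operations and names here are placeholders.

```python
# Toy graph-based workflow executor (illustrative only; not the Xi-cam API).
import numpy as np

def reduce_2d(img):
    return img.mean(axis=0)          # collapse a detector frame to a 1-D profile

def normalize(profile):
    return profile / profile.max()

# Each node names its operation and its upstream dependency.
WORKFLOW = {
    "load":      (lambda _: np.random.rand(512, 512), None),  # stand-in for reading a GISAXS frame
    "reduce":    (reduce_2d, "load"),
    "normalize": (normalize, "reduce"),
}

def run(workflow, node):
    """Execute a node after recursively executing its upstream dependency."""
    op, upstream = workflow[node]
    return op(run(workflow, upstream) if upstream else None)

print(run(WORKFLOW, "normalize").shape)
```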
NASA Astrophysics Data System (ADS)
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection. It can solve the over-segmentation problem existing in local threshold segmentation methods. The method effectively takes advantage of visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted for precisely positioning and segmenting the wood surface defects around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces the noise and small false objects. Experiments on test images of insect hole, dead knot and sound knot show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu and threshold segmentation.
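A hedged OpenCV/NumPy sketch of the pipeline described above, combining spectral residual saliency, Otsu (maximum inter-class variance) thresholding restricted to the salient regions, and a morphological opening; kernel sizes and the restriction step are illustrative choices, not the authors' parameters.

```python
# Saliency-guided defect segmentation sketch (grayscale uint8 input assumed).
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral residual saliency: log-amplitude spectrum minus its local average."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_defects(gray):
    sal = spectral_residual_saliency(gray)
    # Coarse localization: keep the most salient areas.
    _, coarse = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Precise segmentation: Otsu thresholding of the image, restricted to the coarse areas.
    _, fine = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(fine, coarse)
    # Morphological opening suppresses noise and small false objects.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```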
Difference in Visual Processing Assessed by Eye Vergence Movements
Solé Puig, Maria; Puigcerver, Laura; Aznar-Casanova, J. Antonio; Supèr, Hans
2013-01-01
Orienting visual attention is closely linked to the oculomotor system. For example, a shift of attention is usually followed by a saccadic eye movement and can be revealed by micro saccades. Recently we reported a novel role of another type of eye movement, namely eye vergence, in orienting visual attention. Shifts in visuospatial attention are characterized by the response modulation to a selected target. However, unlike (micro-) saccades, eye vergence movements do not carry spatial information (except for depth) and are thus not specific to a particular visual location. To further understand the role of eye vergence in visual attention, we tested subjects with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli, and is characterized by a spatial difference (local vs. global) in perceptual processing. We tested field independent (local; FI) and field dependent (global; FD) observers in a cue/no-cue task and a matching task. We found that FI observers responded faster and had stronger modulation in eye vergence in both tasks than FD subjects. The results may suggest that eye vergence modulation may relate to the trade-off between the size of spatial region covered by attention and the processing efficiency of sensory information. Alternatively, vergence modulation may have a role in the switch in cortical state to prepare the visual system for new incoming sensory information. In conclusion, vergence eye movements may be added to the growing list of functions of fixational eye movements in visual perception. However, further studies are needed to elucidate its role. PMID:24069140
Schmid, Anita M.; Victor, Jonathan D.
2014-01-01
When analyzing a visual image, the brain has to achieve several goals quickly. One crucial goal is to rapidly detect parts of the visual scene that might be behaviorally relevant, while another one is to segment the image into objects, to enable an internal representation of the world. Both of these processes can be driven by local variations in any of several image attributes such as luminance, color, and texture. Here, focusing on texture defined by local orientation, we propose that the two processes are mediated by separate mechanisms that function in parallel. More specifically, differences in orientation can cause an object to “pop out” and attract visual attention, if its orientation differs from that of the surrounding objects. Differences in orientation can also signal a boundary between objects and therefore provide useful information for image segmentation. We propose that contextual response modulations in primary visual cortex (V1) are responsible for orientation pop-out, while a different kind of receptive field nonlinearity in secondary visual cortex (V2) is responsible for orientation-based texture segmentation. We review a recent experiment that led us to put forward this hypothesis along with other research literature relevant to this notion. PMID:25064441
Admissible Diffusion Wavelets and Their Applications in Space-Frequency Processing.
Hou, Tingbo; Qin, Hong
2013-01-01
As signal processing tools, diffusion wavelets and biorthogonal diffusion wavelets have been propelled by recent research in mathematics. They employ diffusion as a smoothing and scaling process to empower multiscale analysis. However, their applications in graphics and visualization are overshadowed by nonadmissible wavelets and their expensive computation. In this paper, our motivation is to broaden the application scope to space-frequency processing of shape geometry and scalar fields. We propose the admissible diffusion wavelets (ADW) on meshed surfaces and point clouds. The ADW are constructed in a bottom-up manner that starts from a local operator in a high frequency, and dilates by its dyadic powers to low frequencies. By relieving the orthogonality and enforcing normalization, the wavelets are locally supported and admissible, hence facilitating data analysis and geometry processing. We define the novel rapid reconstruction, which recovers the signal from multiple bands of high frequencies and a low-frequency base in full resolution. It enables operations localized in both space and frequency by manipulating wavelet coefficients through space-frequency filters. This paper aims to build a common theoretic foundation for a host of applications, including saliency visualization, multiscale feature extraction, spectral geometry processing, etc.
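In generic terms (not the paper's notation), the bottom-up construction sketched above repeatedly applies dyadic powers of a local diffusion operator, with wavelet spaces carrying the detail lost at each step:

```latex
T_{j} = T^{\,2^{j}}, \qquad
V_{j+1} = \operatorname{range}(T_{j}) \subseteq V_{j}, \qquad
V_{j} = V_{j+1} \oplus W_{j}
```

where T is a local smoothing (diffusion) operator on the mesh or point cloud, the scaling spaces V_j capture progressively lower frequencies, and the wavelet spaces W_j hold the detail coefficients that the space-frequency filters manipulate.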
The devil is in the detail: brain dynamics in preparation for a global-local task.
Leaver, Echo E; Low, Kathy A; DiVacri, Assunta; Merla, Arcangelo; Fabiani, Monica; Gratton, Gabriele
2015-08-01
When analyzing visual scenes, it is sometimes important to determine the relevant "grain" size. Attention control mechanisms may help direct our processing to the intended grain size. Here we used the event-related optical signal, a method possessing high temporal and spatial resolution, to examine the involvement of brain structures within the dorsal attention network (DAN) and the visual processing network (VPN) in preparation for the appropriate level of analysis. Behavioral data indicate that the small features of a hierarchical stimulus (local condition) are more difficult to process than the large features (global condition). Consistent with this finding, cues predicting a local trial were associated with greater DAN activation. This activity was bilateral but more pronounced in the left hemisphere, where it showed a frontal-to-parietal progression over time. Furthermore, the amount of DAN activation, especially in the left hemisphere and in parietal regions, was predictive of subsequent performance. Although local cues elicited left-lateralized DAN activity, no preponderantly right activity was observed for global cues; however, the data indicated an interaction between level of analysis (local vs. global) and hemisphere in VPN. They further showed that local processing involves structures in the ventral VPN, whereas global processing involves structures in the dorsal VPN. These results indicate that in our study preparation for analyzing different size features is an asymmetric process, in which greater preparation is required to focus on small rather than large features, perhaps because of their lesser salience. This preparation involves the same DAN used for other attention control operations.
The relationship between level of autistic traits and local bias in the context of the McGurk effect
Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio
2015-01-01
The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
Coding Local and Global Binary Visual Features Extracted From Video Sequences
NASA Astrophysics Data System (ADS)
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.
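A hedged sketch of the local-to-global step referenced in both records above: binary ORB descriptors are quantized against a small visual vocabulary to form a bag-of-visual-words histogram per image. ORB and the k-means-on-float-cast-descriptors vocabulary are stand-ins, not the coding scheme evaluated in the paper.

```python
# Bag-of-visual-words global feature from binary local descriptors (sketch).
# Assumes grayscale uint8 images and an OpenCV build that includes ORB.
import cv2
import numpy as np

def bovw_global_features(images, n_words=64):
    orb = cv2.ORB_create(nfeatures=500)
    per_image = []
    for img in images:
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
    data = np.vstack(per_image).astype(np.float32)
    # Visual vocabulary via k-means (float-cast simplification of binary codebook learning).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(data, n_words, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    hists = []
    for desc in per_image:
        d = np.linalg.norm(desc.astype(np.float32)[:, None] - centers[None], axis=2)
        words = np.argmin(d, axis=1)
        h, _ = np.histogram(words, bins=np.arange(n_words + 1))
        hists.append(h / max(h.sum(), 1))  # normalized histogram = compact global feature
    return np.stack(hists)
```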
Mental Visualization of Objects from Cross-Sectional Images
ERIC Educational Resources Information Center
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2012-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…
Causal Inference for Spatial Constancy across Saccades
Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E.; Medendorp, W. Pieter
2016-01-01
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements and shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using a SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors of the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
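A generic statement of the causal-inference mixture referred to above (the notation is illustrative rather than the paper's): the final localization estimate weights an 'integrate' estimate and a 'segregate' estimate by the posterior probability that the presaccadic memory and the postsaccadic visual signal share a common cause.

```latex
\hat{S} = p(C{=}1 \mid x_{\text{pre}}, x_{\text{post}})\, \hat{S}_{\text{int}}
        + \left(1 - p(C{=}1 \mid x_{\text{pre}}, x_{\text{post}})\right) \hat{S}_{\text{seg}}
```

where the 'integrate' estimate is the reliability-weighted fusion of the two signals, the 'segregate' estimate relies on a single source, and the posterior term is the probability that both signals originate from the same stable object.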
Yoon, Jong H; Sheremata, Summer L; Rokem, Ariel; Silver, Michael A
2013-10-31
Cognitive and information processing deficits are core features and important sources of disability in schizophrenia. Our understanding of the neural substrates of these deficits remains incomplete, in large part because the complexity of impairments in schizophrenia makes the identification of specific deficits very challenging. Vision science presents unique opportunities in this regard: many years of basic research have led to detailed characterization of relationships between structure and function in the early visual system and have produced sophisticated methods to quantify visual perception and characterize its neural substrates. We present a selective review of research that illustrates the opportunities for discovery provided by visual studies in schizophrenia. We highlight work that has been particularly effective in applying vision science methods to identify specific neural abnormalities underlying information processing deficits in schizophrenia. In addition, we describe studies that have utilized psychophysical experimental designs that mitigate generalized deficit confounds, thereby revealing specific visual impairments in schizophrenia. These studies contribute to accumulating evidence that early visual cortex is a useful experimental system for the study of local cortical circuit abnormalities in schizophrenia. The high degree of similarity across neocortical areas of neuronal subtypes and their patterns of connectivity suggests that insights obtained from the study of early visual cortex may be applicable to other brain regions. We conclude with a discussion of future studies that combine vision science and neuroimaging methods. These studies have the potential to address pressing questions in schizophrenia, including the dissociation of local circuit deficits vs. impairments in feedback modulation by cognitive processes such as spatial attention and working memory, and the relative contributions of glutamatergic and GABAergic deficits.
Salient sounds activate human visual cortex automatically.
McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A
2013-05-22
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Compression and reflection of visually evoked cortical waves
Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young
2007-01-01
Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition was important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave, and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern occurred only for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing. PMID:17610821
Characteristic sounds facilitate visual search.
Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2008-06-01
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.
Modulation of human extrastriate visual processing by selective attention to colours and words.
Nobre, A C; Allison, T; McCarthy, G
1998-07-01
The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity
Schettini, Raimondo
2018-01-01
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268
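To make the scoring step concrete, here is a minimal sketch of region-wise anomaly scoring by feature self-similarity: each subregion's degree of abnormality is one minus its best cosine similarity to a dictionary of anomaly-free patch features. The feature extractor is left as a parameter (any pretrained CNN descriptor could be used); extract_features, the patch size, and the stride are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of anomaly scoring by self-similarity to anomaly-free patches.
# `extract_features` stands in for a pretrained CNN descriptor (assumed).
import numpy as np

def anomaly_map(image, extract_features, dictionary, patch=32, stride=16):
    """Score each subregion of `image` by its dissimilarity to `dictionary`,
    an (N, D) array of features from anomaly-free training patches."""
    dict_norm = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    h, w = image.shape[:2]
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    scores = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            f = extract_features(image[y:y + patch, x:x + patch])
            f = f / (np.linalg.norm(f) + 1e-12)
            # degree of abnormality = 1 - best cosine match to any normal patch
            scores[i, j] = 1.0 - float(np.max(dict_norm @ f))
    return scores
```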
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Smid, H G; Jakob, A; Heinze, H J
1999-03-01
What cognitive processes underlie event-related brain potential (ERP) effects related to visual multidimensional selective attention and how are these processes organized? We recorded ERPs when participants attended to one conjunction of color, global shape and local shape and ignored other conjunctions of these attributes in three discriminability conditions. Attending to color and shape produced three ERP effects: frontal selection positivity (FSP), central negativity (N2b), and posterior selection negativity (SN). The results suggested that the processes underlying SN and N2b perform independent within-dimension selections, whereas the process underlying the FSP performs hierarchical between-dimension selections. At posterior electrodes, manipulation of discriminability changed the ERPs to the relevant but not to the irrelevant stimuli, suggesting that the SN does not concern the selection process itself but rather a cognitive process initiated after selection is finished. Other findings suggested that selection of multiple visual attributes occurs in parallel.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. Particularly, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (continuous occupancy maps) using marching cubes. By doing so, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
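As a rough illustration of the divide-and-conquer idea described above, the sketch below clusters the training points, fits one Gaussian process classifier per local cluster, and routes each query point to its nearest cluster. It is a minimal sketch using scikit-learn, not the authors' implementation; the function name local_gp_occupancy and the parameter n_clusters are illustrative, and it assumes every cluster contains both occupied and free samples.

```python
# Minimal local-GP occupancy sketch (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def local_gp_occupancy(X_train, y_train, X_test, n_clusters=8):
    """X_train: (n, d) points; y_train: occupancy labels in {0, 1}.
    Returns occupancy probabilities for X_test (assumes each cluster
    contains samples of both classes)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_train)
    local_gps = []
    for c in range(n_clusters):
        idx = km.labels_ == c
        gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0))
        gp.fit(X_train[idx], y_train[idx])          # one GP per local cluster
        local_gps.append(gp)
    probs = np.empty(len(X_test))
    assign = km.predict(X_test)                      # route queries to clusters
    for c in range(n_clusters):
        sel = assign == c
        if np.any(sel):
            probs[sel] = local_gps[c].predict_proba(X_test[sel])[:, 1]
    return probs
```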
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Object localization, discrimination, and grasping with the optic nerve visual prosthesis.
Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude
2006-01-01
This study involved a volunteer completely blind from retinitis pigmentosa who had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in daily life.
The influence of spontaneous activity on stimulus processing in primary visual cortex.
Schölvinck, M L; Friston, K J; Rees, G
2012-02-01
Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
Good Food, Bad Food, and White Rice: Understanding Child Feeding Using Visual-Narrative Elicitation.
Wentworth, Chelsea
2017-01-01
Visual-narrative elicitation, a process combining photo elicitation and pile sorting in applied medical anthropology, sheds light on food consumption patterns in urban areas of Vanuatu where childhood malnutrition is a persistent problem. Groups of participants took photographs of the foods they feed their children, and the resources and barriers they encounter in accessing foodstuffs. This revealed how imported and local foods are assigned value as "good" or "bad" foods when contributing to dietary diversity and creating appropriate meals for children, particularly in the context of consuming white rice. The process of gathering and working with photographs illuminated the complex negotiations in which caregivers engaged when making food and nutritional choices for their children. At the nexus of visual and medical anthropology, the visual-narrative elicitation process yielded nuanced, comprehensive understandings of how caregivers value the various foods they feed their children.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, these findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
NASA Astrophysics Data System (ADS)
Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas
2017-03-01
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and inexpensiveness. The transverse processes as skeletal landmarks are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes that we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. For validation, we used ground truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization of the 3D deformation of the patient's spine when compared to ground truth CT.
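To make the registration step concrete, here is a minimal sketch, assuming landmark correspondences are already established, of a thin-plate-spline displacement field that warps an average spine model onto patient landmarks, plus a symmetric Hausdorff distance for the kind of surface comparison reported above. It uses SciPy's general-purpose RBF interpolator rather than the authors' pipeline, and all variable names are illustrative.

```python
# Landmark-driven thin-plate-spline warp and Hausdorff check (illustrative).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import directed_hausdorff

def warp_spine_model(model_vertices, model_landmarks, patient_landmarks):
    """Warp an average spine model so its landmarks (e.g. transverse
    processes) move onto the patient's ultrasound-localized landmarks."""
    displacements = patient_landmarks - model_landmarks        # (P, 3)
    field = RBFInterpolator(model_landmarks, displacements,
                            kernel='thin_plate_spline')
    return model_vertices + field(model_vertices)              # warped (M, 3)

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```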
Value associations of irrelevant stimuli modify rapid visual orienting.
Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E
2010-08-01
In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval by mapping image regions to local concepts and representing images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. Further, the visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Finally, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
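The entropy weighting described above is simple enough to state directly in code. The sketch below assumes that "visualness" is the Shannon entropy of a patch's intensity histogram; the function name patch_entropy and the bin count are hypothetical, not the authors' settings.

```python
# Shannon entropy of a local image patch as a 'visualness' weight (sketch).
import numpy as np

def patch_entropy(patch, bins=64):
    """Shannon entropy (bits) of the intensity histogram of a patch."""
    hist, _ = np.histogram(patch.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# e.g. weight a concept feature by the entropy of the patch it came from:
# weighted_feature = concept_vector * patch_entropy(patch)
```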
Positive Contrast Visualization of Nitinol Devices using Susceptibility Gradient Mapping
Vonken, Evert-jan P.A.; Schär, Michael; Stuber, Matthias
2008-01-01
MRI visualization of devices is traditionally based on the signal loss due to T2* effects originating from local susceptibility differences. To visualize nitinol devices with positive contrast, a recently introduced post-processing method is adapted to map the induced susceptibility gradients. This method operates on regular gradient echo MR images and maps the shift in k-space in a (small) neighborhood of every voxel by Fourier analysis followed by a center-of-mass calculation. The quantitative map of the local shifts generates the positive-contrast image of the devices, while areas without susceptibility gradients render a background with noise only. The positive signal response of this method depends only on the choice of the voxel neighborhood size. The properties of the method are explained, and the visualization of a nitinol wire and two stents is shown for illustration. PMID:18727096
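A simplified 2-D sketch of that post-processing idea follows: Fourier-transform a small neighborhood around each voxel of the complex gradient-echo image and estimate the local k-space shift by a centre-of-mass calculation over the magnitude spectrum. This is a toy reconstruction of the stated principle, not the published implementation; the neighborhood size n (the method's single tuning parameter, as noted above) and the 2-D restriction are assumptions.

```python
# Toy 2-D susceptibility gradient map: local k-space shift per voxel (sketch).
import numpy as np

def susceptibility_gradient_map(img, n=9):
    """img: complex 2-D gradient-echo image; n: odd neighborhood size.
    Returns the magnitude of the estimated local k-space shift per voxel."""
    half = n // 2
    ky, kx = np.meshgrid(np.arange(n) - half, np.arange(n) - half, indexing='ij')
    out = np.zeros(img.shape, dtype=float)
    for y in range(half, img.shape[0] - half):
        for x in range(half, img.shape[1] - half):
            block = img[y - half:y + half + 1, x - half:x + half + 1]
            spec = np.abs(np.fft.fftshift(np.fft.fft2(block)))
            w = spec / spec.sum()
            # centre of mass of the local spectrum = estimated k-space shift
            out[y, x] = np.hypot((w * ky).sum(), (w * kx).sum())
    return out
```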
Xi-cam: a versatile interface for data visualization and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke
Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
Deficit in visual temporal integration in autism spectrum disorders.
Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru
2010-04-07
Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.
De Lillo, Carlo; Spinozzi, Giovanna; Truppa, Valentina; Naylor, Donna M
2005-05-01
Results obtained with preschool children (Homo sapiens) were compared with results previously obtained from capuchin monkeys (Cebus apella) in matching-to-sample tasks featuring hierarchical visual stimuli. In Experiment 1, monkeys, in contrast with children, showed an advantage in matching the stimuli on the basis of their local features. These results were replicated in a 2nd experiment in which control trials enabled the authors to rule out that children used spurious cues to solve the matching task. In a 3rd experiment featuring conditions in which the density of the stimuli was manipulated, monkeys' accuracy in the processing of the global shape of the stimuli was negatively affected by the separation of the local elements, whereas children's performance was robust across testing conditions. Children's response latencies revealed a global precedence in the 2nd and 3rd experiments. These results show differences in the processing of hierarchical stimuli by humans and monkeys that emerge early during childhood. 2005 APA, all rights reserved
Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.
2016-01-01
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
Sellers, Kristin K.; Bennett, Davis V.; Hutt, Axel; Williams, James H.
2015-01-01
During general anesthesia, global brain activity and behavioral state are profoundly altered. Yet it remains mostly unknown how anesthetics alter sensory processing across cortical layers and modulate functional cortico-cortical connectivity. To address this gap in knowledge of the micro- and mesoscale effects of anesthetics on sensory processing in the cortical microcircuit, we recorded multiunit activity and local field potential in awake and anesthetized ferrets (Mustela putoris furo) during sensory stimulation. To understand how anesthetics alter sensory processing in a primary sensory area and the representation of sensory input in higher-order association areas, we studied the local sensory responses and long-range functional connectivity of primary visual cortex (V1) and prefrontal cortex (PFC). Isoflurane combined with xylazine provided general anesthesia for all anesthetized recordings. We found that anesthetics altered the duration of sensory-evoked responses, disrupted the response dynamics across cortical layers, suppressed both multimodal interactions in V1 and sensory responses in PFC, and reduced functional cortico-cortical connectivity between V1 and PFC. Together, the present findings demonstrate altered sensory responses and impaired functional network connectivity during anesthesia at the level of multiunit activity and local field potential across cortical layers. PMID:25833839
No psychological effect of color context in a low level vision task
Pedley, Adam; Wade, Alex R
2013-01-01
Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also a lot of overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
Local and Global Processing: Observations from a Remote Culture
ERIC Educational Resources Information Center
Davidoff, Jules; Fonteneau, Elisabeth; Fagot, Joel
2008-01-01
In Experiment 1, a normal adult population drawn from a remote culture (Himba) in northern Namibia made similarity matches to [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. "Cognitive Psychology", 9, 353-383] hierarchical figures. The Himba showed a local bias stronger than that has been…
Adaptation to implied tilt: extensive spatial extrapolation of orientation gradients
Roach, Neil W.; Webb, Ben S.
2013-01-01
To extract the global structure of an image, the visual system must integrate local orientation estimates across space. Progress is being made toward understanding this integration process, but very little is known about whether the presence of structure exerts a reciprocal influence on local orientation coding. We have previously shown that adaptation to patterns containing circular or radial structure induces tilt-aftereffects (TAEs), even in locations where the adapting pattern was occluded. These spatially “remote” TAEs have novel tuning properties and behave in a manner consistent with adaptation to the local orientation implied by the circular structure (but not physically present) at a given test location. Here, by manipulating the spatial distribution of local elements in noisy circular textures, we demonstrate that remote TAEs are driven by the extrapolation of orientation structure over remarkably large regions of visual space (more than 20°). We further show that these effects are not specific to adapting stimuli with polar orientation structure, but require a gradient of orientation change across space. Our results suggest that mechanisms of visual adaptation exploit orientation gradients to predict the local pattern content of unfilled regions of space. PMID:23882243
How does experience modulate auditory spatial processing in individuals with blindness?
Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C
2015-05-01
Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.
Maekawa, Toshihiko; Miyanaga, Yuka; Takahashi, Kenji; Takamiya, Naomi; Ogata, Katsuya; Tobimatsu, Shozo
2017-01-01
Individuals with autism spectrum disorder (ASD) show superior performance in processing fine detail, but often exhibit impaired gestalt face perception. The ventral visual stream from the primary visual cortex (V1) to the fusiform gyrus (V4) plays an important role in form (including faces) and color perception. The aim of this study was to investigate how the ventral stream is functionally altered in ASD. Visual evoked potentials were recorded in high-functioning ASD adults (n = 14) and typically developing (TD) adults (n = 14). We used three types of visual stimuli as follows: isoluminant chromatic (red/green, RG) gratings, high-contrast achromatic (black/white, BW) gratings with high spatial frequency (HSF, 5.3 cycles/degree), and face (neutral, happy, and angry faces) stimuli. Compared with TD controls, ASD adults exhibited longer N1 latency for RG, shorter N1 latency for BW, and shorter P1 latency, but prolonged N170 latency, for face stimuli. Moreover, a greater difference in latency between P1 and N170, or between N1 for BW and N170 (i.e., the prolongation of cortico-cortical conduction time between V1 and V4) was observed in ASD adults. These findings indicate that ASD adults have enhanced fine-form (local HSF) processing, but impaired color processing at V1. In addition, they exhibit impaired gestalt face processing due to deficits in integration of multiple local HSF facial information at V4. Thus, altered ventral stream function may contribute to abnormal social processing in ASD. PMID:28146575
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be driven more strongly by relevant motion (predators, prey, conspecifics) than by irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
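For readers unfamiliar with correlation-type EMDs, the sketch below implements a single one-dimensional Reichardt-style detector with a first-order low-pass delay, using the roughly 0.3 degree input spacing and 0.1 s time constant estimated above. It is a generic textbook EMD, not the authors' full grid model; the sampling step dt and the input handling are assumptions.

```python
# Generic correlation-type (Reichardt) elementary motion detector (sketch).
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter (time constant tau, time step dt)."""
    out = np.zeros_like(signal, dtype=float)
    a = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + a * (signal[t] - out[t - 1])
    return out

def reichardt_emd(left, right, tau=0.1, dt=0.01):
    """Opponent EMD output for two neighboring luminance time series sampled
    ~0.3 degrees apart; positive values signal left-to-right motion."""
    return lowpass(left, tau, dt) * right - lowpass(right, tau, dt) * left
```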
'Where' and 'what' in visual search.
Atkinson, J; Braddick, O J
1989-01-01
A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval determined for three tasks: 'what'--subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where'--subjects reported whether the target was in the upper or lower half of the array; 'fine where'--subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them.
From elements to perception: local and global processing in visual neurons.
Spillmann, L
1999-01-01
Gestalt psychologists in the early part of the century challenged psychophysical notions that perceptual phenomena can be understood from a punctate (atomistic) analysis of the elements present in the stimulus. Their ideas slowed later attempts to explain vision in terms of single-cell recordings from individual neurons. A rapprochement between Gestalt phenomenology and neurophysiology seemed unlikely when the first ECVP was held in Marburg, Germany, in 1978. Since that time, response properties of neurons have been discovered that invite an interpretation of visual phenomena (including illusions) in terms of neuronal processing by long-range interactions, as first proposed by Mach and Hering in the last century. This article traces a personal journey into the early days of neurophysiological vision research to illustrate the progress that has taken place from the first attempts to correlate single-cell responses with visual perceptions. Whereas initially the receptive-field properties of individual classes of cells--e.g., contrast, wavelength, orientation, motion, disparity, and spatial-frequency detectors--were used to account for relatively simple visual phenomena, nowadays complex perceptions are interpreted in terms of long-range interactions, involving many neurons. This change in paradigm from local to global processing was made possible by recent findings, in the cortex, on horizontal interactions and backward propagation (feedback loops) in addition to classical feedforward processing. These mechanisms are exemplified by studies of the tilt effect and tilt aftereffect, direction-specific motion adaptation, illusory contours, filling-in and fading, figure--ground segregation by orientation and motion contrast, and pop-out in dynamic visual-noise patterns. Major questions for future research and a discussion of their epistemological implications conclude the article.
Gender differences in global-local perception? Evidence from orientation and shape judgments.
Kimchi, Ruth; Amishav, Rama; Sulitzeanu-Kenan, Anat
2009-01-01
Direct examinations of gender differences in global-local processing are sparse, and the results are inconsistent. We examined this issue with a visuospatial judgment task and with a shape judgment task. Women and men were presented with hierarchical stimuli that varied in closure (open or closed shape) or in line orientation (oblique or horizontal/vertical) at the global or local level. The task was to classify the stimuli on the basis of the variation at the global level (global classification) or at the local level (local classification). Women's classification by closure (global or local) was more accurate than men's for stimuli that varied in closure on both levels, suggesting a female advantage in discriminating shape properties. No gender differences were observed in global-local processing bias. Women and men exhibited a global advantage, and they did not differ in their speed of global or local classification, with only one exception. Women were slower than men in local classification by orientation when the to-be-classified lines were embedded in a global line with a different orientation. This finding suggests that women are more distracted than men by misleading global oriented context when performing local orientation judgments, perhaps because women and men differ in their ability to use cognitive schemes to compensate for the distracting effects of the global context. Our findings further suggest that whether or not gender differences arise depends not only on the nature of the visual task but also on the visual context.
Escape from harm: linking affective vision and motor responses during active avoidance
Keil, Andreas
2014-01-01
When organisms confront unpleasant objects in their natural environments, they engage in behaviors that allow them to avoid aversive outcomes. Here, we linked visual processing of threat to its behavioral consequences by including a motor response that terminated exposure to an aversive event. Dense-array steady-state visual evoked potentials were recorded in response to conditioned threat and safety signals viewed in active or passive behavioral contexts. The amplitude of neuronal responses in visual cortex increased additively, as a function of emotional value and action relevance. The gain in local cortical population activity for threat relative to safety cues persisted when aversive reinforcement was behaviorally terminated, suggesting a lingering emotionally based response amplification within the visual system. Distinct patterns of long-range neural synchrony emerged between the visual cortex and extravisual regions. Increased coupling between visual and higher-order structures was observed specifically during active perception of threat, consistent with a reorganization of neuronal populations involved in linking sensory processing to action preparation. PMID:24493849
Erlikhman, Gennady; Kellman, Philip J.
2016-01-01
Spatiotemporal boundary formation (SBF) is the perception of illusory boundaries, global form, and global motion from spatially and temporally sparse transformations of texture elements (Shipley and Kellman, 1993a, 1994; Erlikhman and Kellman, 2015). It has been theorized that the visual system uses positions and times of element transformations to extract local oriented edge fragments, which then connect by known interpolation processes to produce larger contours and shapes in SBF. To test this theory, we created a novel display consisting of a sawtooth arrangement of elements that disappeared and reappeared sequentially. Although apparent motion along the sawtooth would be expected, with appropriate spacing and timing, the resulting percept was of a larger, moving, illusory bar. This display approximates the minimal conditions for visual perception of an oriented edge fragment from spatiotemporal information and confirms that such events may be initiating conditions in SBF. Using converging objective and subjective methods, experiments showed that edge formation in these displays was subject to a temporal integration constraint of ~80 ms between element disappearances. The experiments provide clear support for models of SBF that begin with extraction of local edge fragments, and they identify minimal conditions required for this process. We conjecture that these results reveal a link between spatiotemporal object perception and basic visual filtering. Motion energy filters have usually been studied with orientation given spatially by luminance contrast. When orientation is not given in static frames, these same motion energy filters serve as spatiotemporal edge filters, yielding local orientation from discrete element transformations over time. As numerous filters of different characteristic orientations and scales may respond to any simple SBF stimulus, we discuss the aperture and ambiguity problems that accompany this conjecture and how they might be resolved by the visual system. PMID:27445886
Task modulates functional connectivity networks in free viewing behavior.
Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees
2017-10-01
In free visual exploration, eye-movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two nearly identical displays in succession. The first time, they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. The second time, they had to determine whether a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated the functional connectivity using a weighted phase lag index at the frequencies of the theta, alpha, and beta bands, and derived global and local measures of the functional connectivity graphs. We found differences between the two memory task conditions for several network measures, such as mean path length, radius, diameter, closeness, and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape network configurations involved in early visual perception, affecting the way the visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
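The analysis pipeline described here can be illustrated with a minimal sketch: a weighted phase lag index (wPLI) matrix is estimated from band-pass-filtered, fixation-locked epochs, and global and local graph measures (mean path length, radius, diameter, closeness, eccentricity) are derived from it. The sketch below uses a Hilbert-transform approximation of wPLI and placeholder data; `epochs` and the graph construction are assumptions, not the authors' code.

```python
# Minimal sketch, assuming `epochs` is a (n_trials, n_channels, n_samples)
# array already band-pass filtered (e.g., alpha band) and time-locked to
# fixation onsets. wPLI is approximated from the Hilbert analytic signal.
import numpy as np
import networkx as nx
from scipy.signal import hilbert

def wpli_matrix(epochs):
    """wPLI = |E[Im(Sxy)]| / E[|Im(Sxy)|], averaged over trials and samples."""
    analytic = hilbert(epochs, axis=-1)            # complex analytic signal
    n_ch = epochs.shape[1]
    wpli = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            cross = analytic[:, i, :] * np.conj(analytic[:, j, :])
            im = np.imag(cross)
            wpli[i, j] = wpli[j, i] = np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)
    return wpli

def graph_measures(conn):
    """Global/local measures on the connectivity graph (weights -> distances)."""
    n_ch = conn.shape[0]
    G = nx.Graph()
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            if conn[i, j] > 0:
                G.add_edge(i, j, distance=1.0 / conn[i, j])
    spl = dict(nx.all_pairs_dijkstra_path_length(G, weight="distance"))
    ecc = {v: max(d.values()) for v, d in spl.items()}
    return {
        "mean_path_length": np.mean([d for s in spl.values() for d in s.values() if d > 0]),
        "radius": min(ecc.values()),
        "diameter": max(ecc.values()),
        "closeness": nx.closeness_centrality(G, distance="distance"),
        "eccentricity": ecc,
    }

epochs = np.random.randn(40, 32, 100)              # placeholder data
print(graph_measures(wpli_matrix(epochs)))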
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet underaddressed task in the computer vision and image processing fields. Conventional approaches are hampered by two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning the image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of the image local regions, we construct graphlets (small-sized connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets lie close together in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into the graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
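The graphlet-extraction step can be sketched independently of the manifold embedding and multichannel fusion stages, which are not reproduced here. The sketch below builds a region adjacency graph from a hypothetical label image of atomic regions and enumerates small connected subgraphs (graphlets); all names are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: extract "graphlets" (small connected subgraphs) from a
# region adjacency graph over an atomic-region segmentation. `labels` is a
# hypothetical integer label image, one id per atomic region.
import numpy as np
import networkx as nx
from itertools import combinations

def region_adjacency_graph(labels):
    G = nx.Graph()
    # horizontally and vertically adjacent pixels with different labels
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        G.add_edges_from(zip(a[diff].ravel(), b[diff].ravel()))
    return G

def graphlets(G, max_size=3):
    """Enumerate all connected subgraphs with 2..max_size nodes."""
    found = []
    for k in range(2, max_size + 1):
        for combo in combinations(G.nodes, k):
            if nx.is_connected(G.subgraph(combo)):
                found.append(tuple(sorted(combo)))
    return found

labels = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1],
                   [3, 3, 2, 1]])
print(graphlets(region_adjacency_graph(labels), max_size=3))
```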
Dissociation of neural mechanisms underlying orientation processing in humans
Ling, Sam; Pearson, Joel; Blake, Randolph
2009-01-01
Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905
Implicit integration in a case of integrative visual agnosia.
Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo
2007-05-15
We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.
O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C
2016-11-09
Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.
Multivariate spatiotemporal visualizations for mobile devices in Flyover Country
NASA Astrophysics Data System (ADS)
Loeffler, S.; Thorn, R.; Myrbo, A.; Roth, R.; Goring, S. J.; Williams, J.
2017-12-01
Visualizing and interacting with complex multivariate and spatiotemporal datasets on mobile devices is challenging due to their smaller screens, reduced processing power, and limited data connectivity. Pollen data require visualizing pollen assemblages spatially, temporally, and across multiple taxa to understand plant community dynamics through time. Drawing from cartography, information visualization, and paleoecology, we have created new mobile-first visualization techniques that represent multiple taxa across many sites and enable user interaction. Using pollen datasets from the Neotoma Paleoecology Database as a case study, the visualization techniques allow ecological patterns and trends to be quickly understood on a mobile device compared to traditional pollen diagrams and maps. This flexible visualization system can be used for datasets beyond pollen, with the only requirements being point-based localities and multiple variables changing through time or depth.
Auditory and visual interactions between the superior and inferior colliculi in the ferret.
Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K
2015-05-01
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than in the retinally driven superficial SC layers and earlier than in the deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
[Symptoms and lesion localization in visual agnosia].
Suzuki, Kyoko
2004-11-01
There are two cortical visual processing streams, the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream could result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, are different from others in that patients could recognize a face as a face and buildings as buildings, but could not identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition was confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Enlarged lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, which is in agreement with the results of neuroimaging studies that revealed activation of bilateral occipito-temporal regions during object recognition tasks.
Hierarchical Forms Processing in Adults and Children
ERIC Educational Resources Information Center
Harrison, Tamara B.; Stiles, Joan
2009-01-01
Two experiments examined child and adult processing of hierarchical stimuli composed of geometric forms. Adults (ages 18-23 years) and children (ages 7-10 years) performed a forced-choice task gauging similarity between visual stimuli consisting of large geometric objects (global level) composed of small geometric objects (local level). The…
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Superior haptic-to-visual shape matching in autism spectrum disorders.
Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru
2012-04-01
A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures
Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru
2012-01-01
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
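The growth-cone idea lends itself to a toy illustration: object-based attention spreads from a seed over an object mask, and the local step size scales with the distance to the object boundary, so wide homogeneous areas are covered quickly and narrow parts slowly. The sketch below is an illustration of that principle under stated assumptions, not the authors' model; `mask`, `seed`, and the step rule are hypothetical.

```python
# Toy sketch (not the authors' implementation): scale-dependent spread of
# object-based attention over a binary object mask. The growth rate at each
# frontier pixel scales with the local distance to the object boundary.
import numpy as np
from scipy.ndimage import distance_transform_edt

def growth_cone_spread(mask, seed, steps=50):
    """Return an array with the time step at which each object pixel is reached."""
    scale = distance_transform_edt(mask)          # local 'cone' size
    reached = np.full(mask.shape, np.inf)
    reached[seed] = 0
    frontier = {seed}
    for t in range(1, steps + 1):
        new_frontier = set()
        for (y, x) in frontier:
            r = max(1, int(scale[y, x]))          # bigger scale -> bigger step
            ys, xs = np.ogrid[-r:r + 1, -r:r + 1]
            for dy, dx in zip(*np.nonzero(ys**2 + xs**2 <= r**2)):
                qy, qx = y + dy - r, x + dx - r
                if (0 <= qy < mask.shape[0] and 0 <= qx < mask.shape[1]
                        and mask[qy, qx] and np.isinf(reached[qy, qx])):
                    reached[qy, qx] = t
                    new_frontier.add((qy, qx))
        frontier = new_frontier
        if not frontier:
            break
    return reached

mask = np.zeros((40, 40), bool)
mask[5:35, 5:15] = True                           # wide, homogeneous part
mask[18:22, 15:35] = True                         # narrow 'bridge'
print(growth_cone_spread(mask, seed=(20, 10))[20, ::5])
```

The narrow bridge is reached only after many more steps than the wide region, mirroring the reaction-time pattern reported above.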
W-tree indexing for fast visual word generation.
Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao
2013-03-01
The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is the visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree or a K-means tree), in order to redirect the search path close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the Fast Library for Approximate Nearest Neighbors (FLANN) and randomized KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
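The general idea, though not the paper's exact W-tree, can be sketched as follows: candidate visual words returned by a KD-tree query are rescored with a co-occurrence prior conditioned on the word assigned to a neighboring feature, so fewer backtrackings are needed to land near the globally best assignment. The codebook, co-occurrence table, and weighting below are placeholder assumptions.

```python
# Hedged sketch of co-occurrence-weighted visual word assignment
# (an approximation of the idea, not the W-tree itself).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
codebook = rng.normal(size=(200, 64))            # visual-word centers
tree = cKDTree(codebook)

# Co-occurrence table, nominally learned offline from training assignments
# (placeholder: uniform counts plus a diagonal boost), rows = P(word | previous word).
cooc = np.ones((200, 200)) + 5 * np.eye(200)
cooc /= cooc.sum(axis=1, keepdims=True)

def assign_word(feature, prev_word, k=5, alpha=0.5):
    """Pick among the k nearest centers using distance plus a co-occurrence prior."""
    dist, idx = tree.query(feature, k=k)
    score = dist - alpha * np.log(cooc[prev_word, idx] + 1e-12)
    return int(idx[np.argmin(score)])

features = rng.normal(size=(10, 64))             # local descriptors of one image
words = [int(tree.query(features[0])[1])]        # first feature: plain nearest neighbor
for f in features[1:]:
    words.append(assign_word(f, prev_word=words[-1]))
print(words)
```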
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. The aim was to identify and compare regional brain activity associated with one form of visual masking (i.e., backward masking) in patients with schizophrenia and healthy controls. The study was conducted at the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System with 19 patients with schizophrenia and 19 healthy control subjects. Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The main outcome measure was the magnitude of the functional magnetic resonance imaging signal during backward masking. Two ROIs (the lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. The results support a key role for LO in visual masking, consistent with previous studies in healthy controls. They also indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G
2016-08-22
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior: the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.
The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.
Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R
2012-07-12
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.
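The two-stage account (local, bottom-up boundary detection followed by a region-filling process) can be illustrated with a toy sketch on a texture-defined figure. This is only an illustration of the computational idea described above, not the authors' model; the orientation map and thresholds are assumptions.

```python
# Toy sketch: boundary detection from local feature discontinuities,
# followed by region filling inside the detected boundary.
import numpy as np
from scipy.ndimage import binary_fill_holes, label

orientation = np.zeros((40, 40))
orientation[10:30, 10:30] = np.pi / 2            # texture-defined figure region

# Stage 1: local boundary detection from orientation discontinuities
gy, gx = np.gradient(orientation)
boundary = np.hypot(gx, gy) > 0.1

# Stage 2: region filling -- group the pixels enclosed by the boundary
filled = binary_fill_holes(boundary)
figure, n_regions = label(filled & ~boundary)
print("figure regions found:", n_regions)
```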
ERIC Educational Resources Information Center
Jarrold, Christopher; Gilchrist, Iain D.; Bender, Alison
2005-01-01
Individuals with autism show relatively strong performance on tasks that require them to identify the constituent parts of a visual stimulus. This is assumed to be the result of a bias towards processing the local elements in a display that follows from a weakened ability to integrate information at the global level. The results of the current…
David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K
2011-10-01
Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control persons participated in a visual-audio priming task, which required the classification of sounds that were primed by either semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit, which is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.
Behavior in Oblivion: The Neurobiology of Subliminal Priming
Jacobs, Christianne; Sack, Alexander T.
2012-01-01
Subliminal priming refers to behavioral modulation by an unconscious stimulus, and can thus be regarded as a form of unconscious visual processing. Theories on recurrent processing have suggested that the neural correlate of consciousness (NCC) comprises the non-hierarchical transfer of stimulus-related information. According to these models, the neural correlate of subliminal priming (NCSP) corresponds to the visual processing within the feedforward sweep. Research from cognitive neuroscience on these two concepts and the relationship between them is discussed here. Evidence favoring the necessity of recurrent connectivity for visual awareness is accumulating, although some questions, such as the need for global versus local recurrent processing, are not clarified yet. However, this is not to say that recurrent processing is sufficient for consciousness, as a neural definition of consciousness in terms of recurrent connectivity would imply. We argue that the limited interest cognitive neuroscience currently has in the NCSP is undeserved, because the discovery of the NCSP can give insight into why people do (and do not) express certain behavior. PMID:24962773
Looking without Perceiving: Impaired Preattentive Perceptual Grouping in Autism Spectrum Disorder
Carther-Krone, Tiffany A.; Shomstein, Sarah; Marotta, Jonathan J.
2016-01-01
Before becoming aware of a visual scene, our perceptual system has organized and selected elements in our environment to which attention should be allocated. Part of this process involves grouping perceptual features into a global whole. Individuals with autism spectrum disorders (ASD) rely on a more local processing strategy, which may be driven by difficulties perceptually grouping stimuli. We tested this notion using a line discrimination task in which two horizontal lines were superimposed on a background of black and white dots organized so that, on occasion, the dots induced the Ponzo illusion if perceptually grouped together. Results showed that even though neither group was aware of the illusion, the ASD group was significantly less likely than the typically developing group to make perceptual judgments influenced by the illusion, revealing difficulties in preattentive grouping of visual stimuli. This may explain why individuals with ASD rely on local processing strategies, and offers new insight into the mechanism driving perceptual grouping in the typically developing human brain. PMID:27355678
Self-Orientation Modulates the Neural Correlates of Global and Local Processing
Liddell, Belinda J.; Das, Pritha; Battaglini, Eva; Malhi, Gin S.; Felmingham, Kim L.; Whitford, Thomas J.; Bryant, Richard A.
2015-01-01
Differences in self-orientation (or “self-construal”) may affect how the visual environment is attended, but the neural and cultural mechanisms that drive this remain unclear. Behavioral studies have demonstrated that people from Western backgrounds with predominant individualistic values are perceptually biased towards local-level information; whereas people from non-Western backgrounds that support collectivist values are preferentially focused on contextual and global-level information. In this study, we compared two groups differing in predominant individualistic (N = 15) vs collectivistic (N = 15) self-orientation. Participants completed a global/local perceptual conflict task whilst undergoing functional Magnetic Resonance Imaging (fMRI) scanning. When participants high in individualistic values attended to the global level (ignoring the local level), greater activity was observed in the frontoparietal and cingulo-opercular networks that underpin attentional control, compared to the match (congruent) baseline. Participants high in collectivistic values activated similar attentional control networks only when directly compared with global processing. This suggests that global interference was stronger than local interference in the conflict task in the collectivistic group. Both groups showed increased activity in dorsolateral prefrontal regions involved in resolving perceptual conflict during heightened distractor interference. The findings suggest that self-orientation may play an important role in driving attention networks to facilitate interaction with the visual environment. PMID:26270820
Local and Global Processing in Blind and Sighted Children in a Naming and Drawing Task
ERIC Educational Resources Information Center
Puspitawati, Ira; Jebrane, Ahmed; Vinter, Annie
2014-01-01
This study investigated the spatial analysis of tactile hierarchical patterns in 110 early-blind children aged 6-8 to 16-18 years, as compared to 90 blindfolded sighted children, in a naming and haptic drawing task. The results revealed that regardless of visual status, young children predominantly produced local responses in both tasks, whereas…
Top-down beta oscillatory signaling conveys behavioral context in early visual cortex.
Richter, Craig G; Coppola, Richard; Bressler, Steven L
2018-05-03
Top-down modulation of sensory processing is a critical neural mechanism subserving numerous important cognitive roles, one of which may be to inform lower-order sensory systems of the current 'task at hand' by conveying behavioral context to these systems. Accumulating evidence indicates that top-down cortical influences are carried by directed interareal synchronization of oscillatory neuronal populations, with recent results pointing to beta-frequency oscillations as particularly important for top-down processing. However, it remains to be determined if top-down beta-frequency oscillations indeed convey behavioral context. We measured spectral Granger Causality (sGC) using local field potentials recorded from microelectrodes chronically implanted in visual areas V1/V2, V4, and TEO of two rhesus macaque monkeys, and applied multivariate pattern analysis to the spatial patterns of top-down sGC. We decoded behavioral context by discriminating patterns of top-down (V4/TEO-to-V1/V2) beta-peak sGC for two different task rules governing correct responses to identical visual stimuli. The results indicate that top-down directed influences are carried to visual cortex by beta oscillations, and differentiate task demands even before visual stimulus processing. They suggest that top-down beta-frequency oscillatory processes coordinate processing of sensory information by conveying global knowledge states to early levels of the sensory cortical hierarchy independently of bottom-up stimulus-driven processing.
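The multivariate pattern analysis step described here can be sketched on its own: given per-trial spatial patterns of top-down beta-peak sGC, a cross-validated classifier decodes the task rule. The sketch below uses placeholder data and scikit-learn; the spectral Granger causality estimation itself is not shown, and all names are illustrative assumptions.

```python
# Hedged sketch of the decoding step only: classify the task rule from
# spatial patterns of top-down beta-peak sGC (placeholder feature matrix).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_pairs = 200, 60                      # placeholder dimensions
task_rule = rng.integers(0, 2, n_trials)         # two task rules
# Placeholder sGC patterns with a small rule-dependent offset
sgc_patterns = rng.normal(size=(n_trials, n_pairs)) + 0.3 * task_rule[:, None]

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, sgc_patterns, task_rule, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```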
Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A
2011-01-01
The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered to four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed. Age and level of functioning were taken into account. Analyses of covariance revealed that results of deaf individuals with ID and ASD are in line with the results of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not compared with deaf individuals with ID. In ASD either with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of the central coherence theory, whereas in deafness, enhancement in visual perception might be due to a more generally enhanced visual perception as a result of auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
Atypical Local Interference Affects Global Processing in Children with Neurofibromatosis Type 1.
Payne, Jonathan M; Porter, Melanie A; Bzishvili, Samantha; North, Kathryn N
2017-05-01
We examined hierarchical visuospatial processing in children with neurofibromatosis type 1 (NF1), a single-gene disorder associated with visuospatial impairments, attention deficits, and executive dysfunction. We used a modified Navon paradigm consisting of a large "global" shape composed of smaller "local" shapes that were either congruent (same) or incongruent (different) to the global shape. Participants were instructed to name either the global or local shape within a block. Reaction times, interference ratios, and error rates of children with NF1 (n=30) and typically developing controls (n=24) were compared. Typically developing participants demonstrated the expected global processing bias, evidenced by a vulnerability to global interference when naming local stimuli without a congruency cost when naming global stimuli. NF1 participants, however, experienced significant interference from the unattended level when naming both the local and global levels of the stimuli. Findings suggest that children with NF1 do not demonstrate the typical human bias of processing visual information from a global perspective. (JINS, 2017, 23, 446-450).
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beam-formed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue and the DASD beam-formed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
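The Frangi post-processing step can be sketched in isolation: a vesselness filter enhances the thin, bright catheter against speckle, and a threshold yields a rough segmentation for overlay rendering. The sketch below works on a synthetic 2D B-mode slice; the DASD beamforming stage is not reproduced, and `bmode` and the threshold are placeholder assumptions.

```python
# Minimal post-processing sketch: Frangi vesselness enhancement of a
# bright, line-like catheter in a (synthetic) 2D B-mode slice.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(2)
bmode = rng.normal(0.1, 0.05, size=(128, 128))     # speckle-like background
rr = np.arange(20, 110)
bmode[rr, (0.5 * rr + 10).astype(int)] = 1.0       # synthetic bright catheter

vesselness = frangi(bmode, sigmas=(1, 2, 3), black_ridges=False)
catheter_mask = vesselness > 0.5 * vesselness.max()
print("segmented pixels:", int(catheter_mask.sum()))
```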
Spatio-Temporal Metabolite Profiling of the Barley Germination Process by MALDI MS Imaging
Gorzolka, Karin; Kölling, Jan; Nattkemper, Tim W.; Niehaus, Karsten
2016-01-01
MALDI mass spectrometry imaging was performed to localize metabolites during the first seven days of the barley germination. Up to 100 mass signals were detected of which 85 signals were identified as 48 different metabolites with highly tissue-specific localizations. Oligosaccharides were observed in the endosperm and in parts of the developed embryo. Lipids in the endosperm co-localized in dependency on their fatty acid compositions with changes in the distributions of diacyl phosphatidylcholines during germination. 26 potentially antifungal hordatines were detected in the embryo with tissue-specific localizations of their glycosylated, hydroxylated, and O-methylated derivatives. In order to reveal spatio-temporal patterns in local metabolite compositions, multiple MSI data sets from a time series were analyzed in one batch. This requires a new preprocessing strategy to achieve comparability between data sets as well as a new strategy for unsupervised clustering. The resulting spatial segmentation for each time point sample is visualized in an interactive cluster map and enables simultaneous interactive exploration of all time points. Using this new analysis approach and visualization tool germination-dependent developments of metabolite patterns with single MS position accuracy were discovered. This is the first study that presents metabolite profiling of a cereal's germination process over time by MALDI MSI with the identification of a large number of peaks of agronomically and industrially important compounds such as oligosaccharides, lipids and antifungal agents. Their detailed localization as well as the MS cluster analyses for on-tissue metabolite profile mapping revealed important information for the understanding of the germination process, which is of high scientific interest. PMID:26938880
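The batch-analysis idea (joint preprocessing and clustering so that cluster labels are comparable across time points) can be sketched as follows. Spectra from several time-point datasets on a shared m/z grid are TIC-normalized, clustered jointly, and the labels are mapped back to each pixel grid. All names, shapes, and the choice of k-means are placeholder assumptions, not the authors' pipeline.

```python
# Hedged sketch: joint preprocessing and clustering of multiple MSI
# time-point datasets for comparable spatial segmentations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# One (n_pixels, n_mz) intensity matrix per germination time point
datasets = [rng.random((30 * 30, 100)) for _ in range(3)]
shapes = [(30, 30)] * 3

# Joint preprocessing: TIC normalization, then stack all time points
normed = [d / (d.sum(axis=1, keepdims=True) + 1e-12) for d in datasets]
stacked = np.vstack(normed)

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(stacked)

# Map the shared cluster labels back to each time point's pixel grid
segmentations, start = [], 0
for (h, w) in shapes:
    segmentations.append(labels[start:start + h * w].reshape(h, w))
    start += h * w
print([seg.shape for seg in segmentations])
```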
Laycock, Robin; Chan, Daniel; Crewther, Sheila G
2017-01-01
One aspect of the social communication impairments that characterize autism spectrum disorder (ASD) is a reduced use of often subtle non-verbal social cues. People with ASD, and those with self-reported sub-threshold autistic traits, also show impairments in rapid visual processing of stimuli unrelated to social or emotional properties. Hence, this study sought to investigate whether perceptually non-conscious visual processing is related to autistic traits. A neurotypical sample of thirty young adults completed the Subthreshold Autism Trait Questionnaire and a Posner-like attention cueing task. Continuous Flash Suppression (CFS) was employed to render incongruous hierarchical arrow cues perceptually invisible prior to consciously presented targets. This was achieved via a 10 Hz masking stimulus presented to the dominant eye that suppressed information presented to the non-dominant eye. Non-conscious arrows consisted of local arrow elements pointing in one direction and forming a global arrow shape pointing in the opposite direction. On each trial, the cue provided either a valid or invalid cue for the spatial location of the subsequent target, depending on which level (global or local) received privileged attention. A significant autism-trait group by global cue validity interaction indicated a difference in the extent of non-conscious local/global cueing between groups. Simple effect analyses revealed that whilst participants with lower autistic traits showed a global arrow cueing effect, those with higher autistic traits demonstrated a small local arrow cueing effect. These results suggest that non-conscious processing biases in local/global attention may be related to individual differences in autistic traits.
Face to face with emotion: holistic face processing is modulated by emotional state.
Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa
2012-01-01
Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.
Network model of top-down influences on local gain and contextual interactions in visual cortex.
Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D
2013-10-22
The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
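The core mechanism (top-down control of local gain on recurrent, intrinsic connections) can be illustrated with a toy rate model. The sketch below is an illustration under stated assumptions, not the authors' network: a ring of orientation-tuned units with local excitatory recurrence whose contribution is scaled by a hypothetical top-down gain `g`; raising `g` amplifies contextual (recurrent) input relative to feedforward drive.

```python
# Toy rate-model sketch of top-down gain control on local recurrence.
import numpy as np

def simulate(g, n=60, steps=200, dt=0.1):
    theta = np.linspace(0, np.pi, n, endpoint=False)
    # local excitatory connections between units with similar preferences
    W = np.exp(-np.subtract.outer(theta, theta) ** 2 / 0.05)
    W /= W.sum(axis=1, keepdims=True)
    ff = np.exp(-(theta - np.pi / 2) ** 2 / 0.02)   # feedforward drive
    r = np.zeros(n)
    for _ in range(steps):
        drive = ff + g * (W @ r)                    # top-down gain on recurrence
        r += dt * (-r + np.maximum(drive, 0))       # rectified rate dynamics
    return r

for g in (0.0, 0.5, 0.9):
    print(f"g={g}: peak response={simulate(g).max():.2f}")
```

Because the recurrent weights are row-normalized and `g` stays below 1, the dynamics settle to a stable fixed point, with larger `g` producing stronger contextual facilitation of the driven units.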
Spatial Frequency Priming of Scene Perception in Adolescents with and without ASD
ERIC Educational Resources Information Center
Vanmarcke, Steven; Noens, Ilse; Steyaert, Jean; Wagemans, Johan
2017-01-01
While most typically developing (TD) participants have a coarse-to-fine processing style, people with autism spectrum disorder (ASD) seem to be less globally and more locally biased when processing visual information. The stimulus-specific spatial frequency content might be directly relevant to determine this temporal hierarchy of visual…
A Cognitive Model for Exposition of Human Deception and Counterdeception
1987-10-01
for understanding deception and counterdeception, for developing related tactics, and for stimulating research in cognitive processes. [The remainder of this entry is extraction residue from a block diagram of the cognitive model (buffer memory; memory manager with local problem solving, learning, and procedures; visual and auditory sensors) and from the report's reference list, including Perception and Misperception in International Politics, Princeton University Press, Princeton, NJ, 1976, and Key, W.B., Subliminal Seduction.]
NASA Astrophysics Data System (ADS)
Hu, Jin; Tian, Jie; Pan, Xiaohong; Liu, Jiangang
2007-03-01
The purpose of this paper is to compare EEG source localization and fMRI during emotional processing. 108 pictures for EEG (categorized as positive, negative, and neutral) and 72 pictures for fMRI were presented to 24 healthy, right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was used to compare the spatiotemporal activation patterns of fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some extent, although some mismatch in localization between the two methods was also observed. In the future, simultaneous EEG and fMRI recording should be applied to this study.
Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry
Jaworska, Katarzyna; Lages, Martin
2014-01-01
Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063
Left cytoarchitectonic BA 44 processes syntactic gender violations in determiner phrases.
Heim, Stefan; van Ermingen, Muna; Huber, Walter; Amunts, Katrin
2010-10-01
Recent neuroimaging studies make contradictory predictions about the involvement of left Brodmann's area (BA) 44 in processing local syntactic violations in determiner phrases (DPs). Some studies suggest a role for BA 44 in detecting local syntactic violations, whereas others attribute this function to the left premotor cortex. Therefore, the present event-related functional magnetic resonance imaging (fMRI) study investigated whether left cytoarchitectonic BA 44 was activated when German DPs involving syntactic gender violations were compared with correct DPs (correct: 'der Baum', the[masculine] tree[masculine]; violated: 'das Baum', the[neuter] tree[masculine]). Grammaticality judgements were made for both visual and auditory DPs to be able to generalize the results across modalities. Grammaticality judgements involved, among others, left BA 44 and left BA 6 in the premotor cortex for visual and auditory stimuli. Most importantly, activation in left BA 44 was consistently higher for violated than for correct DPs. This finding was behaviourally corroborated by longer reaction times for violated versus correct DPs. Additional brain regions, showing the same effect, included left premotor cortex, supplementary motor area, right middle and superior frontal cortex, and left cerebellum. Based on earlier findings from the literature, the results indicate the involvement of left BA 44 in processing local syntactic violations when these include morphological features, whereas left premotor cortex seems crucial for the detection of local word category violations. © 2010 Wiley-Liss, Inc.
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
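A much-simplified version of this pipeline can be sketched for two grayscale frame sequences: blockwise DCT, conversion to a crude local contrast, division by per-frequency visual thresholds, differencing, and Minkowski pooling into a single quality number. The threshold matrix, pooling exponent, and omission of the temporal filtering and masking stages are placeholder simplifications, not the patented model's calibrated values.

```python
# Hedged sketch of a DVQ-style metric for reference (R) and test (T)
# grayscale sequences of shape (frames, height, width).
import numpy as np
from scipy.fft import dctn

def block_dct(frames, b=8):
    t, h, w = frames.shape
    blocks = frames.reshape(t, h // b, b, w // b, b).transpose(0, 1, 3, 2, 4)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def dvq_score(R, T, b=8, beta=4.0):
    dR, dT = block_dct(R, b), block_dct(T, b)
    dc = np.abs(dR[..., 0:1, 0:1]) + 1e-6              # block mean (DC term)
    cR, cT = dR / dc, dT / dc                          # crude local contrast
    # Placeholder per-frequency threshold matrix (not the calibrated model)
    thresh = 0.02 * (1 + np.arange(b))[:, None] * (1 + np.arange(b))[None, :]
    err = (cT - cR) / thresh                           # threshold units (JNDs)
    return np.mean(np.abs(err) ** beta) ** (1 / beta)  # Minkowski pooling

rng = np.random.default_rng(4)
R = rng.random((10, 64, 64))
T = np.clip(R + rng.normal(0, 0.05, R.shape), 0, 1)    # distorted version
print("DVQ-like score:", round(float(dvq_score(R, T)), 4))
```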
Numerosity processing in early visual cortex.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
2017-08-15
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Relationship between visual binding, reentry and awareness.
Koivisto, Mika; Silvanto, Juha
2011-12-01
Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.
Noguchi, Yasuki; Tomoike, Kouta
2016-01-12
Recent studies argue that strongly motivated positive emotions (e.g. desire) narrow the scope of attention. This argument is mainly based on an observation that, while humans normally respond faster to global than local information of a visual stimulus (global advantage), positive affects eliminated the global advantage by selectively speeding responses to local (but not global) information. In other words, narrowing of attentional scope was indirectly evidenced by the elimination of the global advantage (the same speed of processing between global and local information). No study has directly shown that strongly motivated positive affects induce faster responses to local than global information while excluding a bias for global information (global advantage) in a baseline (emotionally neutral) condition. In the present study, we addressed this issue by eliminating the global advantage in a baseline (neutral) state. Induction of positive affects under this state resulted in faster responses to local than global information. Our results provided direct evidence that positive affects of high motivational intensity narrow the scope of attention.
Visual attention shifting in autism spectrum disorders.
Richard, Annette E; Lajiness-O'Neill, Renee
2015-01-01
Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.
Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images
NASA Astrophysics Data System (ADS)
Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.
2016-07-01
In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high-definition, high-quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in Cartesian galactic coordinates. We then process this image piecewise, applying a custom multi-scale local stretching algorithm combined with a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. With this tool we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane, which we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.
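A greatly simplified sketch of this kind of multi-scale local stretching is given below: each pixel is normalized by a local background and a local dispersion estimated at several Gaussian scales, and the per-scale results are averaged. The scale list and gain are placeholder assumptions; this is not the authors' tool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_local_stretch(image, sigmas=(2, 8, 32), gain=1.0):
    """Enhance faint extended features against a strongly varying background.

    For each scale, subtract a local background (Gaussian mean) and divide by a
    local dispersion estimate, then average the per-scale results for display.
    """
    image = image.astype(float)
    layers = []
    for s in sigmas:
        local_mean = gaussian_filter(image, s)
        local_var = gaussian_filter((image - local_mean) ** 2, s)
        layers.append((image - local_mean) / np.sqrt(local_var + 1e-12))
    stretched = gain * np.mean(layers, axis=0)
    # rescale to [0, 1] for visualization
    return (stretched - stretched.min()) / (np.ptp(stretched) + 1e-12)
```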
Visual EKF-SLAM from Heterogeneous Landmarks
Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.
2016-01-01
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping (SLAM) from vision performs both the spatial and temporal fusion of these data on a map as a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks), including a comparison between landmark parametrizations and an evaluation of how heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimental methodology. PMID:27070602
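As a reminder of what such a back-end computes, the snippet below sketches a generic EKF measurement update of the kind applied for each landmark observation, whether the landmark is a point or a line. It is not the authors' implementation; the measurement model h, its Jacobian, and the noise covariance are assumptions supplied by the caller and depend on the chosen landmark parametrization.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update.

    x, P   : state mean and covariance (camera pose plus landmark parameters)
    z      : observed measurement (e.g., a projected point or line)
    h      : function mapping the state to the predicted measurement
    H_jac  : function returning the Jacobian of h at the current state
    R      : measurement noise covariance
    """
    H = H_jac(x)
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```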
Recurrent Activation of Neural Circuits during Attention to Global and Local Visual Information.
Iglesias-Fuster, Jorge; Piña-Novo, Daniela; Ontivero-Ortega, Marlis; Lage-Castellanos, Agustín; Valdés-Sosa, Mitchell
2018-05-28
The attentional selection of different hierarchical levels within compound (Navon) figures has been studied with event-related potentials (ERPs) by comparing the ERPs obtained during attention to the global or the local level. These studies, using the canonical Navon figures, have produced contradictory results, with doubts regarding the scalp distribution of the effects. Moreover, the evidence about the temporal evolution of the processing of these two levels is not clear. Here, we unveiled global and local letters at distinct times, which enabled separation of their ERP responses. We combined this approach with the temporal generalization methodology, a novel multivariate technique which facilitates exploring the temporal structure of these ERPs. Opposite lateralization patterns were obtained for the selection negativities generated when attending to global and local distractors (D statistics, p < .005), with maxima in right and left occipito-temporal scalp regions, respectively (η2 = .111, p < .01; η2 = .042, p < .04). However, both discrimination negativities elicited when comparing targets and distractors at the global or the local level were lateralized to the left hemisphere (η2 = .25, p < .03 and η2 = .142, p < .05, respectively). Recurrent activation patterns were found for both global and local stimuli, with scalp topographies corresponding to early preparatory stages reemerging during the attentional selection process, thus indicating recursive attentional activation. This implies that selective attention to global and local hierarchical levels recycles similar neural correlates at different time points. These neural correlates appear to be mediated by visual extra-striate areas.
NASA Astrophysics Data System (ADS)
Asimov, M. M.; Asimov, R. M.; Rubinov, A. N.
2011-05-01
We propose and examine a new approach to visualizing a local network of cutaneous blood vessels using laser optical methods for applications in biometry and photomedicine. Various optical schemes for the formation of biometric information on the architecture of the blood vessels of skin tissue are analyzed. We developed an optical model of the interaction of laser radiation with biological tissue and a mathematical algorithm for processing the measurement results. We show that, in medicine, the visualization of blood vessels makes it possible to identify regions of disturbed blood microcirculation and to control tissue hypoxia, as well as to maintain the local concentration of oxygen at the level necessary for normal cellular metabolism. We propose noninvasive optical methods for modern photomedicine and biometry, for the diagnostics and elimination of tissue hypoxia and for personal identification and verification via the pattern of cutaneous blood vessels.
A recurrent neural model for proto-object based contour integration and figure-ground segregation.
Hu, Brian; Niebur, Ernst
2017-12-01
Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
Liu, Tianyin; Yeh, Su-Ling
2018-01-01
The left-side bias (LSB) effect observed in face and expert Chinese character perception is suggested to be an expertise marker for visual object recognition. However, in character perception this effect is limited to characters printed in a familiar font (font-sensitive LSB effect). Here we investigated whether the LSB and font-sensitive LSB effects depend on participants’ familiarity with global structure or local component information of the stimuli through examining their transfer effects across simplified and traditional Chinese scripts: the two Chinese scripts share similar overall structures but differ in the visual complexity of local components in general. We found that LSB in expert Chinese character processing could be transferred to the Chinese script that the readers are unfamiliar with. In contrast, the font-sensitive LSB effect did not transfer, and was limited to characters with the visual complexity the readers were most familiar with. These effects suggest that the LSB effect may be generalized to another visual category with similar overall structures; in contrast, effects of within-category variations such as fonts may depend on familiarity with local component information of the stimuli, and thus may be limited to the exemplars of the category that experts are typically exposed to. PMID:29608570
Wood, Joanne M; Owsley, Cynthia
2014-01-01
The useful field of view test was developed to reflect the visual difficulties that older adults experience with everyday tasks. Importantly, the useful field of view test (UFOV) is one of the most extensively researched and promising predictor tests for a range of driving outcome measures, including driving ability and crash risk as well as other everyday tasks. Currently available commercial versions of the test can be administered using personal computers; these measure the speed of visual processing for rapid detection and localization of targets under conditions of divided visual attention and in the presence and absence of visual clutter. The test is believed to assess higher-order cognitive abilities, but performance also relies on visual sensory function because in order for targets to be attended to, they must be visible. The format of the UFOV has been modified over the years; the original version estimated the spatial extent of the useful field of view, while the latest version measures visual processing speed. While deficits in the useful field of view are associated with functional impairments in everyday activities in older adults, there is also emerging evidence from several research groups that improvements in visual processing speed can be achieved through training. These improvements have been shown to reduce crash risk, and can have a positive impact on health and functional well-being, with the potential to increase the mobility and hence the independence of older adults. © 2014 S. Karger AG, Basel
Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.
2014-01-01
Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/. Contact: guy.hagen@lf1.cuni.cz. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516
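The Monte Carlo performance evaluation mentioned above amounts to simulating emitters with known ground-truth positions, adding camera noise, running a localization routine, and measuring the resulting error. The sketch below illustrates that loop generically (Gaussian PSF, Poisson noise, centroid localization); it is not ThunderSTORM code, and the photon counts, PSF width, and background level are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spot(size=15, sigma=1.3, photons=500, background=10):
    """Render one emitter at a random sub-pixel position with Poisson noise."""
    true_xy = (size - 1) / 2 + rng.uniform(-1, 1, size=2)
    yy, xx = np.mgrid[:size, :size]
    psf = np.exp(-((xx - true_xy[0]) ** 2 + (yy - true_xy[1]) ** 2) / (2 * sigma ** 2))
    image = rng.poisson(background + photons * psf / psf.sum())
    return image, true_xy

def centroid_localize(image):
    """Background-subtracted centre of mass as a simple localization estimate."""
    img = image - np.median(image)
    img[img < 0] = 0
    yy, xx = np.mgrid[:image.shape[0], :image.shape[1]]
    total = img.sum()
    return np.array([(img * xx).sum() / total, (img * yy).sum() / total])

errors = []
for _ in range(1000):
    frame, truth = simulate_spot()
    errors.append(np.linalg.norm(centroid_localize(frame) - truth))
print("RMS localization error (pixels):", np.sqrt(np.mean(np.square(errors))))
```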
Mottron, L; Peretz, I; Ménard, E
2000-11-01
A multi-modal abnormality in the integration of parts and whole has been proposed to account for a bias toward local stimuli in individuals with autism (Frith, 1989; Mottron & Belleville, 1993). In the current experiment, we examined the utility of hierarchical models in characterising musical information processing in autistic individuals. Participants were 13 high-functioning individuals with autism and 13 individuals of normal intelligence matched on chronological age, nonverbal IQ, and laterality, and without musical experience. The task consisted of same-different judgements of pairs of melodies. Differential local and global processing was assessed by manipulating the level, local or global, at which modifications occurred. No deficit was found in the two measures of global processing. In contrast, the clinical group performed better than the comparison group in the detection of change in nontransposed, contour-preserved melodies that tap local processing. These findings confirm the existence of a "local bias" in music perception in individuals with autism, but challenge the notion that it is accounted for by a deficit in global music processing. The present study suggests that enhanced processing of elementary physical properties of incoming stimuli, as found previously in the visual modality, may also exist in the auditory modality.
KinImmerse: Macromolecular VR for NMR ensembles
Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C
2009-01-01
Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844
Mender, Bedeho M. W.; Stringer, Simon M.
2015-01-01
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions. PMID:25717301
Vickers, Douglas; Bovet, Pierre; Lee, Michael D; Hughes, Peter
2003-01-01
The planar Euclidean version of the travelling salesperson problem (TSP) requires finding a tour of minimal length through a two-dimensional set of nodes. Despite the computational intractability of the TSP, people can produce rapid, near-optimal solutions to visually presented versions of such problems. To explain this, MacGregor et al (1999, Perception 28 1417-1428) have suggested that people use a global-to-local process, based on a perceptual tendency to organise stimuli into convex figures. We review the evidence for this idea and propose an alternative, local-to-global hypothesis, based on the detection of least distances between the nodes in an array. We present the results of an experiment in which we examined the relationships between three objective measures and performance measures of optimality and response uncertainty in tasks requiring participants to construct a closed tour or an open path. The data are not well accounted for by a process based on the convex hull. In contrast, results are generally consistent with a locally focused process based initially on the detection of nearest-neighbour clusters. Individual differences are interpreted in terms of a hierarchical process of constructing solutions, and the findings are related to a more general analysis of the role of nearest neighbours in the perception of structure and motion.
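The local-to-global hypothesis can be made concrete with a nearest-neighbour tour builder. The sketch below is the standard greedy nearest-neighbour heuristic, included only to illustrate the kind of 'least distances' process the authors contrast with convex-hull accounts; it is not their model of human performance.

```python
import numpy as np

def nearest_neighbour_tour(points, start=0):
    """Greedy tour: repeatedly hop to the closest unvisited node.

    points : (n, 2) array of node coordinates
    Returns the visiting order and the length of the closed tour.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % n]])
                 for i in range(n))
    return tour, length
```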
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
Motion transparency: making models of motion perception transparent.
Snowden; Verstraten
1999-10-01
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond the first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces and so on. An interesting, if complicated case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as 'filter selection' and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
Anatomy and physiology of the afferent visual system.
Prasad, Sashank; Galetta, Steven L
2011-01-01
The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
Microfluidic local perfusion chambers for the visualization and manipulation of synapses
Taylor, Anne M.; Dieterich, Daniela C.; Ito, Hiroshi T.; Kim, Sally A.; Schuman, Erin M.
2010-01-01
Summary: The polarized nature of neurons as well as the size and density of synapses complicates the manipulation and visualization of cell biological processes that control synaptic function. Here we developed a microfluidic local perfusion (μLP) chamber to access and manipulate synaptic regions and pre- and post-synaptic compartments in vitro. This chamber directs the formation of synapses in >100 parallel rows connecting separate neuron populations. A perfusion channel transects the parallel rows allowing access to synaptic regions with high spatial and temporal resolution. We used this chamber to investigate synapse-to-nucleus signaling. Using the calcium indicator dye, Fluo-4, we measured changes in calcium at dendrites and somata, following local perfusion of glutamate. Exploiting the high temporal resolution of the chamber, we exposed synapses to "spaced" or "massed" application of glutamate and then examined levels of pCREB in somata. Lastly, we applied the metabotropic receptor agonist, DHPG, to dendrites and observed increases in Arc transcription and Arc transcript localization. PMID:20399729
A special purpose knowledge-based face localization method
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad; Jassim, Sabah
2008-04-01
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g., age, face, gender, race and visual speech recognition). We shall present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by special-purpose edge detection, then an image refinement step. The output image will be passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level will be transformed into a binary image that will be scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We shall present results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
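The wavelet stage of such a pipeline can be sketched as follows: decompose the pre-processed frame, binarize the LL sub-band, and scan it with a rectangular template for candidate regions. This is a minimal sketch under assumed choices (Haar wavelet, two decomposition levels, mean threshold, fixed template size), not the authors' method.

```python
import numpy as np
import pywt

def ll_subband_binary(gray_frame, level=2, wavelet="haar"):
    """Return a binary map of the LL sub-band for candidate-face scanning."""
    coeffs = pywt.wavedec2(gray_frame.astype(float), wavelet, level=level)
    ll = coeffs[0]                       # approximation (LL) sub-band at `level`
    return (ll > ll.mean()).astype(np.uint8)

def scan_with_template(binary, template_h=8, template_w=6, min_fill=0.6):
    """Slide a template and keep windows that are dense enough in 'on' pixels."""
    h, w = binary.shape
    hits = []
    for i in range(h - template_h + 1):
        for j in range(w - template_w + 1):
            window = binary[i:i + template_h, j:j + template_w]
            if window.mean() >= min_fill:
                hits.append((i, j))      # top-left corner of a candidate location
    return hits
```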
Web-based interactive 2D/3D medical image processing and visualization software.
Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid
2010-05-01
There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current locally installed medical image processing software, each layer is highly independent of the others. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed so that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images based on the combination of the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to assess the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
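The local spatial frequency measure used here as an activity indicator can be computed in a sliding window from row and column gradients. The sketch below is a generic LSF implementation under an assumed window size, together with a simple choose-max fusion rule; the full DSWT/DCT machinery is not reproduced and the details are assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_frequency(image, window=7):
    """LSF = sqrt(local mean of squared row gradients + squared column gradients)."""
    image = image.astype(float)
    row_diff = np.zeros_like(image)
    col_diff = np.zeros_like(image)
    row_diff[:, 1:] = image[:, 1:] - image[:, :-1]    # horizontal differences
    col_diff[1:, :] = image[1:, :] - image[:-1, :]    # vertical differences
    rf2 = uniform_filter(row_diff ** 2, size=window)  # local means of squares
    cf2 = uniform_filter(col_diff ** 2, size=window)
    return np.sqrt(rf2 + cf2)

def fuse_by_lsf(coeffs_a, coeffs_b, window=7):
    """Pick, per pixel, the coefficient whose neighbourhood has the higher LSF."""
    mask = local_spatial_frequency(coeffs_a, window) >= \
           local_spatial_frequency(coeffs_b, window)
    return np.where(mask, coeffs_a, coeffs_b)
```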
Local Versus Global Effects of Isoflurane Anesthesia on Visual Processing in the Fly Brain.
Cohen, Dror; Zalucki, Oressia H; van Swinderen, Bruno; Tsuchiya, Naotsugu
2016-01-01
What characteristics of neural activity distinguish the awake and anesthetized brain? Drugs such as isoflurane abolish behavioral responsiveness in all animals, implying evolutionarily conserved mechanisms. However, it is unclear whether this conservation is reflected at the level of neural activity. Studies in humans have shown that anesthesia is characterized by spatially distinct spectral and coherence signatures that have also been implicated in the global impairment of cortical communication. We questioned whether anesthesia has similar effects on global and local neural processing in one of the smallest brains, that of the fruit fly (Drosophila melanogaster). Using a recently developed multielectrode technique, we recorded local field potentials from different areas of the fly brain simultaneously, while manipulating the concentration of isoflurane. Flickering visual stimuli ('frequency tags') allowed us to track evoked responses in the frequency domain and measure the effects of isoflurane throughout the brain. We found that isoflurane reduced power and coherence at the tagging frequency (13 or 17 Hz) in central brain regions. Unexpectedly, isoflurane increased power and coherence at twice the tag frequency (26 or 34 Hz) in the optic lobes of the fly, but only for specific stimulus configurations. By modeling the periodic responses, we show that the increase in power in peripheral areas can be attributed to local neuroanatomy. We further show that the effects on coherence can be explained by impacted signal-to-noise ratios. Together, our results show that general anesthesia has distinct local and global effects on neuronal processing in the fruit fly brain.
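In essence, the frequency-tagging analysis reduces to estimating spectral power at the tag frequency (and its first harmonic) and coherence between recording sites. A minimal sketch with an assumed sampling rate and window length is shown below; it is not the authors' analysis code.

```python
import numpy as np
from scipy.signal import welch, coherence

def tag_power_and_coherence(lfp_a, lfp_b, fs=1000.0, tag_hz=13.0, nperseg=2048):
    """Power of two LFP channels at the tag frequency and its first harmonic,
    plus their coherence at the same frequencies (fs and nperseg are assumed)."""
    f, pa = welch(lfp_a, fs=fs, nperseg=nperseg)
    _, pb = welch(lfp_b, fs=fs, nperseg=nperseg)
    _, coh = coherence(lfp_a, lfp_b, fs=fs, nperseg=nperseg)
    out = {}
    for label, freq in (("tag", tag_hz), ("harmonic", 2 * tag_hz)):
        k = np.argmin(np.abs(f - freq))           # nearest frequency bin
        out[label] = {"power_a": pa[k], "power_b": pb[k], "coherence": coh[k]}
    return out
```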
Local contextual processing of abstract and meaningful real-life images in professional athletes.
Fogelson, Noa; Fernandez-Del-Olmo, Miguel; Acero, Rafael Martín
2012-05-01
We investigated the effect of abstract versus real-life meaningful images from sports on local contextual processing in two groups of professional athletes. Local context was defined as the occurrence of a short predictive series of stimuli occurring before delivery of a target event. EEG was recorded in 10 professional basketball players and 9 professional athletes of individual sports during three sessions. In each session, a different set of visual stimuli was presented: triangles facing left, up, right, or down; four images of a basketball player throwing a ball; or four images of a baseball player pitching a baseball. Stimuli consisted of 15% targets and 85% standards, with equal numbers of three types of standards. Recording blocks consisted of targets preceded by randomized sequences of standards and by sequences including a predictive sequence signaling the occurrence of a subsequent target event. Subjects pressed a button in response to targets. In all three sessions, reaction times and peak P3b latencies were shorter for predicted targets compared with random targets, the last most informative stimulus of the predictive sequence induced a robust P3b, and N2 amplitude was larger for random targets compared with predicted targets. P3b and N2 peak amplitudes were larger in the professional basketball group than in the professional athletes of individual sports, across all three sessions. The findings of this study suggest that local contextual information is processed similarly for abstract and for meaningful images, and that professional basketball players seem to allocate more attentional resources to the processing of these visual stimuli.
Jung, Minju; Hwang, Jungsik; Tani, Jun
2015-01-01
It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
[Sensory loss and brain reorganization].
Fortin, Madeleine; Voss, Patrice; Lassonde, Maryse; Lepore, Franco
2007-11-01
It is without a doubt that humans are first and foremost visual beings. Even though the other sensory modalities provide us with valuable information, it is vision that generally offers the most reliable and detailed information concerning our immediate surroundings. It is therefore not surprising that nearly a third of the human brain processes, in one way or another, visual information. But what happens when the visual information no longer reaches these brain regions responsible for processing it? Indeed, numerous medical conditions such as congenital glaucoma, retinitis pigmentosa and retinal detachment, to name a few, can disrupt the visual system and lead to blindness. So, do the brain areas responsible for processing visual stimuli simply shut down and become non-functional? Do they become dead weight and simply stop contributing to cognitive and sensory processes? Current data suggest that this is not the case. Quite the contrary, it would seem that congenitally blind individuals benefit from the recruitment of these areas by other sensory modalities to carry out non-visual tasks. In fact, our laboratory has been studying blindness and its consequences on both the brain and behaviour for many years now. We have shown that blind individuals demonstrate exceptional hearing abilities. This finding holds true for stimuli originating from both near and far space. It also holds true, under certain circumstances, for those who lost their sight later in life, beyond a period generally believed to limit the brain changes following the loss of sight. In the case of the early blind, we have shown that their ability to localize sounds is strongly correlated with activity in the occipital cortex (the site of visual processing), demonstrating that these areas are functionally engaged by the task. Therefore it would seem that the plastic nature of the human brain allows them to make new use of the cerebral areas normally dedicated to visual processing.
Lee, Junghee; Cohen, Mark S; Engel, Stephen A; Glahn, David; Nuechterlein, Keith H; Wynn, Jonathan K; Green, Michael F
2010-07-01
Visual masking paradigms assess the early part of visual information processing, which may reflect vulnerability measures for schizophrenia. We examined the neural substrates of visual backward masking performance in unaffected siblings of schizophrenia patients using functional magnetic resonance imaging (fMRI). Twenty-one unaffected siblings of schizophrenia patients and 19 healthy controls performed a backward masking task and three functional localizer tasks to identify three visual processing regions of interest (ROI): the lateral occipital complex (LO), the motion-sensitive area, and retinotopic areas. In the masking task, we systematically manipulated stimulus onset asynchronies (SOAs). We analyzed fMRI data in two complementary ways: 1) an ROI approach for the three visual areas, and 2) a whole-brain analysis. The groups did not differ in behavioral performance. For the ROI analysis, both groups increased activation as SOAs increased in LO. The groups did not differ in activation levels of the three ROIs. For the whole-brain analysis, controls increased activation as a function of SOA, compared with siblings, in several regions (i.e., anterior cingulate cortex, posterior cingulate cortex, inferior prefrontal cortex, inferior parietal lobule). The study found that: 1) area LO showed sensitivity to the masking effect in both groups; 2) siblings did not differ from controls in activation of LO; and 3) the groups differed significantly in several brain regions outside visual processing areas that have been related to attentional or re-entrant processes. These findings suggest that LO dysfunction may be a disease indicator rather than a risk indicator for schizophrenia. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal cortex, which is known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.
Okada, Takashi; Sato, Wataru; Toichi, Motomi
2006-11-01
Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shifts. A target localization task with preceding non-predictive gaze cues presented to each visual field was undertaken by 44 healthy subjects, and reaction times (RTs) were measured. A face identification task was also given to determine hemispheric dominance in face processing for each subject. RT differences between valid and invalid cues were larger when cues were presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.
Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner
2016-09-14
Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allows for direct visual inspection of charge transfer processes.
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Geveci, Berk
2014-11-01
The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends suggest that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive number of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
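The difference between a pipeline filter and a worklet can be illustrated with a toy example: a worklet is a small, stateless, per-element operation that a scheduler maps over the data with as many threads as the hardware offers. The sketch below only illustrates that idea in Python; the project's actual framework is not shown here, and the function and parameter names are invented for the illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def magnitude_worklet(point):
    """Stateless per-element operation: no pipeline object, no shared state."""
    x, y, z = point
    return math.sqrt(x * x + y * y + z * z)

def dispatch(worklet, data, max_workers=8):
    """A toy scheduler: map the worklet over all elements concurrently."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worklet, data))

# usage: magnitudes = dispatch(magnitude_worklet, [(1.0, 2.0, 2.0), (3.0, 4.0, 0.0)])
```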
ERIC Educational Resources Information Center
Klin, Ami; Jones, Warren
2006-01-01
The weak central coherence (WCC) account of autism characterizes the learning style of individuals with this condition as favoring localized and fragmented (to the detriment of global and integrative) processing of information. This pattern of learning is thought to lead to deficits in aspects of perception (e.g., face processing), cognition, and…
Visual-Cerebellar Pathways and Their Roles in the Control of Avian Flight.
Wylie, Douglas R; Gutiérrez-Ibáñez, Cristián; Gaede, Andrea H; Altshuler, Douglas L; Iwaniuk, Andrew N
2018-01-01
In this paper, we review the connections and physiology of visual pathways to the cerebellum in birds and consider their role in flight. We emphasize that there are two visual pathways to the cerebellum. One is to the vestibulocerebellum (folia IXcd and X), which originates from two retinal-recipient nuclei that process optic flow: the nucleus of the basal optic root (nBOR) and the pretectal nucleus lentiformis mesencephali (LM). The second is to the oculomotor cerebellum (folia VI-VIII), which receives optic flow information, mainly from LM, but also local visual motion information from the optic tectum, and other visual information from the ventral lateral geniculate nucleus (Glv). The tectum, LM and Glv are all intimately connected with the pontine nuclei, which also project to the oculomotor cerebellum. We believe this rich integration of visual information in the cerebellum is important for analyzing the motion parallax that occurs during flight. Finally, we build on a suggestion by Ibbotson (2017) that the hypertrophy observed in LM in hummingbirds might be due to an increase in the processing demands associated with the pathway to the oculomotor cerebellum as they fly through a cluttered environment while feeding.
Coventry, Kenny R; Christophel, Thomas B; Fehr, Thorsten; Valdés-Conroy, Berenice; Herrmann, Manfred
2013-08-01
When looking at static visual images, people often exhibit mental animation, anticipating visual events that have not yet happened. But what determines when mental animation occurs? Measuring mental animation using localized brain function (visual motion processing in the middle temporal and middle superior temporal areas, MT+), we demonstrated that animating static pictures of objects is dependent both on the functionally relevant spatial arrangement that objects have with one another (e.g., a bottle above a glass vs. a glass above a bottle) and on the linguistic judgment to be made about those objects (e.g., "Is the bottle above the glass?" vs. "Is the bottle bigger than the glass?"). Furthermore, we showed that mental animation is driven by functional relations and language separately in the right hemisphere of the brain but conjointly in the left hemisphere. Mental animation is not a unitary construct; the predictions humans make about the visual world are driven flexibly, with hemispheric asymmetry in the routes to MT+ activation.
NASA Astrophysics Data System (ADS)
Jeong, Samuel; Ito, Yoshikazu; Edwards, Gary; Fujita, Jun-ichi
2018-06-01
The visualization of localized electronic charges on nanocatalysts is expected to yield fundamental information about catalytic reaction mechanisms. We have developed a high-sensitivity detection technique for the visualization of localized charges on a catalyst and their corresponding electric field distribution, using a low-energy beam of 1 to 5 keV electrons and a high-sensitivity scanning transmission electron microscope (STEM) detector. The highest sensitivity for visualizing a localized electric field was ∼0.08 V/µm at a distance of ∼17 µm from a localized charge at a primary electron energy of 1 keV, and a weak local electric field produced by 200 electrons accumulated on the carbon nanotube (CNT) apex could be visualized. We also observed that Au nanoparticles distributed on a CNT forest tended to accumulate a certain amount of charge, about 150 electrons, at a −2 V bias.
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
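For readers unfamiliar with the reconstruction approaches compared here, the L2 minimum-norm estimate amounts to a regularized pseudo-inverse of the leadfield. The sketch below shows that computation on synthetic data; the leadfield `L`, the measurements `y`, and the regularization value are placeholders, not parameters from this study.

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, reg=1e-2):
    """L2 minimum-norm inverse: sources = L^T (L L^T + reg*I)^(-1) data.

    leadfield : (n_sensors, n_sources) forward model
    data      : (n_sensors,) measured field at one time point
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + reg * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, data)

# Toy example: 50 sensors, 200 candidate sources, one truly active source.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 200))
true_sources = np.zeros(200)
true_sources[42] = 1.0
y = L @ true_sources + 0.05 * rng.standard_normal(50)
estimate = minimum_norm_estimate(L, y)
print("strongest estimated source:", int(np.argmax(np.abs(estimate))))
```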
Phantom experiments to improve parathyroid lesion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.
2007-12-15
This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using 99mTc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 99mTc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75±50 counts/pixel) and target-to-background (T:B) ratios (range=2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88±6% versus 68±6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88±6% versus 58±6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.
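The OSEM algorithm referred to above updates the image one projection subset at a time with a multiplicative EM step. A minimal sketch on a toy system matrix is given below; the matrix, phantom, and subset count are invented for illustration and bear no relation to the clinical acquisition or filtering parameters in the study.

```python
import numpy as np

def osem(system_matrix, projections, n_subsets=4, n_iterations=10, eps=1e-12):
    """Ordered-subset expectation maximization for a toy emission system.

    system_matrix : (n_bins, n_voxels) forward projector A
    projections   : (n_bins,) measured counts y
    """
    n_bins, n_voxels = system_matrix.shape
    x = np.ones(n_voxels)                          # flat initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iterations):
        for rows in subsets:
            A = system_matrix[rows]
            y = projections[rows]
            expected = A @ x + eps                 # forward projection
            ratio = y / expected
            x *= (A.T @ ratio) / (A.T @ np.ones(len(rows)) + eps)
    return x

# Toy example: random system matrix and a two-hot-voxel phantom.
rng = np.random.default_rng(1)
A = rng.random((64, 16))
truth = np.zeros(16)
truth[[3, 11]] = [75.0, 150.0]
y = rng.poisson(A @ truth).astype(float)
print(np.round(osem(A, y), 1))
```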
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics; it is a multisystem process.
Weighted link graphs: a distributed IDS for secondary intrusion detection and defense
NASA Astrophysics Data System (ADS)
Zhou, Mian; Lang, Sheau-Dong
2005-03-01
While a firewall installed at the perimeter of a local network provides the first line of defense against hackers, many intrusion incidents are the result of successful penetration of the firewalls. One computer's compromise often puts the entire network at risk. In this paper, we propose an IDS that provides finer control over the internal network. The system focuses on the variations in connection-based behavior of each single computer and uses a weighted link graph to visualize the overall traffic abnormalities. Our system functions as a distributed personal IDS that also provides centralized traffic analysis through graphical visualization. We use a novel weight-assignment scheme for local detection within each end agent. Local abnormalities are quantified by the node weights and link weights and then sent to the central analyzer to build the weighted link graph. Thus, we distribute the burden of traffic processing and visualization to each agent and make the overall intrusion detection more efficient. As LANs are more vulnerable to inside attacks, our system is designed as a reinforcement to prevent corruption from the inside.
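A minimal sketch of how per-agent node and link weights might be merged into a central weighted link graph is shown below; the report format, field names, and alert threshold are hypothetical, since the paper's actual weight-assignment scheme is not reproduced here.

```python
from collections import defaultdict

def build_weighted_link_graph(agent_reports):
    """Merge per-host agent reports into a global weighted link graph.

    agent_reports: list of dicts in a hypothetical format such as
        {"host": "10.0.0.5",
         "node_weight": 3.2,                        # local abnormality score
         "links": {("10.0.0.5", "10.0.0.9"): 1.4}}  # per-connection weights
    """
    node_weights = defaultdict(float)
    link_weights = defaultdict(float)
    for report in agent_reports:
        node_weights[report["host"]] += report["node_weight"]
        for link, weight in report["links"].items():
            link_weights[tuple(sorted(link))] += weight   # undirected links
    return node_weights, link_weights

reports = [
    {"host": "10.0.0.5", "node_weight": 3.2,
     "links": {("10.0.0.5", "10.0.0.9"): 1.4}},
    {"host": "10.0.0.9", "node_weight": 0.7,
     "links": {("10.0.0.9", "10.0.0.5"): 0.9}},
]
nodes, links = build_weighted_link_graph(reports)
suspicious = [host for host, w in nodes.items() if w > 2.0]  # threshold is arbitrary
print(suspicious, dict(links))
```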
Hippocampal gamma-band synchrony and pupillary responses index memory during visual search.
Montefusco-Siegmund, Rodrigo; Leonard, Timothy K; Hoffman, Kari L
2017-04-01
Memory for scenes is supported by the hippocampus, among other interconnected structures, but the neural mechanisms related to this process are not well understood. To assess the role of the hippocampus in memory-guided scene search, we recorded local field potentials and multiunit activity from the hippocampus of macaques as they performed goal-directed search tasks using natural scenes. We additionally measured pupil size during scene presentation, which in humans is modulated by recognition memory. We found that both pupil dilation and search efficiency accompanied scene repetition, thereby indicating memory for scenes. Neural correlates included a brief increase in hippocampal multiunit activity and a sustained synchronization of unit activity to gamma band oscillations (50-70 Hz). The repetition effects on hippocampal gamma synchronization occurred when pupils were most dilated, suggesting an interaction between aroused, attentive processing and hippocampal correlates of recognition memory. These results suggest that the hippocampus may support memory-guided visual search through enhanced local gamma synchrony. © 2016 Wiley Periodicals, Inc.
Price, D; Tyler, L K; Neto Henriques, R; Campbell, K L; Williams, N; Treder, M S; Taylor, J R; Henson, R N A
2017-06-09
Slowing is a common feature of ageing, yet a direct relationship between neural slowing and brain atrophy has yet to be established in healthy humans. We combine magnetoencephalographic (MEG) measures of neural processing speed with magnetic resonance imaging (MRI) measures of white and grey matter in a large population-derived cohort to investigate the relationship between age-related structural differences and visual evoked field (VEF) and auditory evoked field (AEF) delay across two different tasks. Here we use a novel technique to show that VEFs exhibit a constant delay, whereas AEFs exhibit delay that accumulates over time. White-matter (WM) microstructure in the optic radiation partially mediates visual delay, suggesting increased transmission time, whereas grey matter (GM) in auditory cortex partially mediates auditory delay, suggesting less efficient local processing. Our results demonstrate that age has dissociable effects on neural processing speed, and that these effects relate to different types of brain atrophy.
Ruthmann, Katja; Schacht, Annekathrin
2017-01-01
Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants’ significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. PMID:28541505
Disinhibition outside receptive fields in the visual cortex.
Walker, Gary A; Ohzawa, Izumi; Freeman, Ralph D
2002-07-01
By definition, the region outside the classical receptive field (CRF) of a neuron in the visual cortex does not directly activate the cell. However, the response of a neuron can be influenced by stimulation of the surrounding area. In previous work, we showed that this influence is mainly suppressive and that it is generally limited to a local region outside the CRF. In the experiments reported here, we investigate the mechanisms of the suppressive effect. Our approach is to find the position of a grating patch that is most effective in suppressing the response of a cell. We then use a masking stimulus at different contrasts over the grating patch in an attempt to disinhibit the response. We find that suppressive effects may be partially or completely reversed by use of the masking stimulus. This disinhibition suggests that effects from outside the CRF may be local. Although they do not necessarily underlie the perceptual analysis of a figure-ground visual scene, they may provide a substrate for this process.
Possible Quantum Absorber Effects in Cortical Synchronization
NASA Astrophysics Data System (ADS)
Kämpf, Uwe
The Wheeler-Feynman transactional "absorber" approach was proposed originally to account for anomalous resonance coupling between spatio-temporally distant measurement partners in entangled quantum states of so-called Einstein-Podolsky-Rosen paradoxes, e.g. of spatio-temporal non-locality, quantum teleportation, etc. Applied to quantum brain dynamics, however, this view provides an anticipative resonance coupling model for aspects of cortical synchronization and recurrent visual action control. It is proposed to consider the registered activation patterns of neuronal loops in so-called synfire chains not as a result of retarded brain communication processes, but rather as surface effects of a system of standing waves generated in the depth of visual processing. According to this view, they arise from a counterbalance between the actual input's delayed bottom-up data streams and top-down recurrent information-processing of advanced anticipative signals in a Wheeler-Feynman-type absorber mode. In the framework of a "time-loop" model, findings about mirror neurons in the brain cortex are suggested to be at least partially associated with temporal rather than spatial mirror functions of visual processing, similar to phase conjugate adaptive resonance-coupling in nonlinear optics.
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may run into difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there has been little evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared against unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
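One common way to fuse an audio bearing with a visual bearing is precision-weighted (Gaussian) combination, sketched below; the angles and variances are made-up values, angle wrap-around is ignored, and this is only a stand-in for the paper's full Bayesian inference system.

```python
def fuse_bearings(audio_deg, audio_var, visual_deg, visual_var):
    """Precision-weighted (Gaussian) fusion of two bearing estimates.

    Returns the fused bearing (degrees) and its variance; the modality with
    the smaller variance dominates, mirroring Bayesian cue combination.
    Assumes the two bearings are close enough that wrap-around can be ignored.
    """
    w_audio, w_visual = 1.0 / audio_var, 1.0 / visual_var
    fused = (w_audio * audio_deg + w_visual * visual_deg) / (w_audio + w_visual)
    return fused, 1.0 / (w_audio + w_visual)

# Made-up example: a noisy audio bearing and a sharper visual bearing of a speaker.
print(fuse_bearings(audio_deg=24.0, audio_var=36.0,
                    visual_deg=18.0, visual_var=4.0))
```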
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
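The core of the attending phase, combining biased feature maps into a location-based saliency map and then scoring proto-objects, can be sketched in a few lines; the feature maps, weights, and segment labels below are synthetic, and the code illustrates the general idea rather than the authors' implementation.

```python
import numpy as np

def proto_object_saliency(feature_maps, weights, labels):
    """Combine weighted feature maps into a location-based saliency map and
    pick the most salient proto-object.

    feature_maps : list of (H, W) arrays (e.g. intensity, colour, orientation)
    weights      : top-down bias per feature map
    labels       : (H, W) integer map of proto-object segments (0 = background)
    """
    saliency = sum(w * f for w, f in zip(weights, feature_maps))
    scores = {lab: saliency[labels == lab].mean()
              for lab in np.unique(labels) if lab != 0}
    winner = max(scores, key=scores.get)
    return saliency, winner

# Synthetic example: three feature maps and two hand-drawn proto-objects.
rng = np.random.default_rng(2)
maps = [rng.random((32, 32)) for _ in range(3)]
labels = np.zeros((32, 32), dtype=int)
labels[4:12, 4:12] = 1           # proto-object 1
labels[18:28, 20:30] = 2         # proto-object 2
maps[1][18:28, 20:30] += 1.0     # make object 2 salient in the "colour" map
_, attended = proto_object_saliency(maps, weights=[1.0, 1.0, 1.0], labels=labels)
print("attended proto-object:", attended)
```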
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.
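The first fusion level, depth-based filtering of 2-D detections, can be illustrated with a simple gate on the median depth inside each detected rectangle; the depth range and the detection format below are assumptions for illustration, not the paper's actual filter.

```python
import numpy as np

def depth_filter_detections(detections, depth_map, min_m=0.5, max_m=4.5):
    """Keep only 2-D detections whose median depth lies in a plausible range.

    detections : list of (x0, y0, x1, y1, score) boxes in pixel coordinates
    depth_map  : (H, W) array of depths in metres, registered to the image
    """
    kept = []
    for x0, y0, x1, y1, score in detections:
        patch = depth_map[y0:y1, x0:x1]
        if patch.size == 0:
            continue
        depth = float(np.median(patch))
        if min_m <= depth <= max_m:              # reject implausible depths
            kept.append((x0, y0, x1, y1, score, depth))
    return kept

# Toy usage: one detection at a sensible depth, one on a far-away wall.
depth = np.full((240, 320), 8.0)                 # background at 8 m
depth[60:180, 40:120] = 2.0                      # a person-sized region at 2 m
boxes = [(40, 60, 120, 180, 0.9), (200, 40, 300, 200, 0.8)]
print(depth_filter_detections(boxes, depth))
```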
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Visualization of flow during cleaning process on a liquid nanofibrous filter
NASA Astrophysics Data System (ADS)
Bílek, P.
2017-10-01
This paper deals with visualization of flow during the cleaning process on a nanofibrous filter. Cleaning a filter is a very important part of the filtration process: it extends the lifetime of the filter and improves its filtration properties. Cleaning is carried out on flat-sheet filters, where particles are deposited on the filter surface and form a filtration cake. The cleaning process dislodges the deposited filtration cake, which is released from the membrane surface into the retentate flow. The blocked pores in the filter are opened again and the hydrodynamic properties are restored. The presented optical method makes it possible to observe the flow behaviour in a thin laser sheet on the inlet side of a tested filter during the cleaning process. The local concentration of solid particles can be estimated, yielding new information about the cleaning process. The article describes the cleaning process on nanofibrous membranes for wastewater treatment. The hydrodynamic data were compared to the images of the cleaning process.
Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão
2018-05-24
A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams-hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.
Local sleep homeostasis in the avian brain: convergence of sleep function in mammals and birds?
Lesku, John A; Vyssotski, Alexei L; Martinez-Gonzalez, Dolores; Wilzeck, Christiane; Rattenborg, Niels C
2011-08-22
The function of the brain activity that defines slow wave sleep (SWS) and rapid eye movement (REM) sleep in mammals is unknown. During SWS, the level of electroencephalogram slow wave activity (SWA or 0.5-4.5 Hz power density) increases and decreases as a function of prior time spent awake and asleep, respectively. Such dynamics occur in response to waking brain use, as SWA increases locally in brain regions used more extensively during prior wakefulness. Thus, SWA is thought to reflect homeostatically regulated processes potentially tied to maintaining optimal brain functioning. Interestingly, birds also engage in SWS and REM sleep, a similarity that arose via convergent evolution, as sleeping reptiles and amphibians do not show similar brain activity. Although birds deprived of sleep show global increases in SWA during subsequent sleep, it is unclear whether avian sleep is likewise regulated locally. Here, we provide, to our knowledge, the first electrophysiological evidence for local sleep homeostasis in the avian brain. After staying awake watching David Attenborough's The Life of Birds with only one eye, SWA and the slope of slow waves (a purported marker of synaptic strength) increased only in the hyperpallium--a primary visual processing region--neurologically connected to the stimulated eye. Asymmetries were specific to the hyperpallium, as the non-visual mesopallium showed a symmetric increase in SWA and wave slope. Thus, hypotheses for the function of mammalian SWS that rely on local sleep homeostasis may apply also to birds.
AstroVis: Visualizing astronomical data cubes
NASA Astrophysics Data System (ADS)
Finniss, Stephen; Tyler, Robin; Questiaux, Jacques
2016-08-01
AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time, which speeds up access to the data; the Image Processing Component (IPC), which breaks up the data cube into smaller pieces that can be processed locally and gives a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
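The Overview + Detail idea can be illustrated with a memory-mapped cube: a block-averaged overview of the whole file plus a full-resolution subsection. The sketch below uses a raw float32 file and NumPy as stand-ins for AstroVis's file access and image processing components, so the file format and function names are hypothetical.

```python
import numpy as np

def cube_overview_and_detail(path, shape, block=4,
                             detail=(slice(None), slice(100, 164), slice(100, 164))):
    """Overview + Detail access to a large data cube stored as raw float32.

    The overview is a block-averaged, low-resolution copy of the whole cube;
    the detail view is a small subsection read at full resolution.
    """
    cube = np.memmap(path, dtype=np.float32, mode="r", shape=shape)  # lazy access
    nz, ny, nx = (s - s % block for s in shape)                      # crop to blocks
    overview = (cube[:nz, :ny, :nx]
                .reshape(nz // block, block, ny // block, block, nx // block, block)
                .mean(axis=(1, 3, 5)))
    detail_view = np.array(cube[detail])          # full-resolution subsection
    return overview, detail_view

# Toy usage: write a small random cube to disk, then load it in the two modes.
shape = (16, 256, 256)
np.random.rand(*shape).astype(np.float32).tofile("toy_cube.raw")
overview, detail_view = cube_overview_and_detail("toy_cube.raw", shape)
print(overview.shape, detail_view.shape)
```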
Mealor, Andy D; Simner, Julia; Rothen, Nicolas; Carmichael, Duncan A; Ward, Jamie
2016-01-01
We developed the Sussex Cognitive Styles Questionnaire (SCSQ) to investigate visual and verbal processing preferences and incorporate global/local processing orientations and systemising into a single, comprehensive measure. In Study 1 (N = 1542), factor analysis revealed six reliable subscales to the final 60 item questionnaire: Imagery Ability (relating to the use of visual mental imagery in everyday life); Technical/Spatial (relating to spatial mental imagery, and numerical and technical cognition); Language & Word Forms; Need for Organisation; Global Bias; and Systemising Tendency. Thus, we replicate previous findings that visual and verbal styles are separable, and that types of imagery can be subdivided. We extend previous research by showing that spatial imagery clusters with other abstract cognitive skills, and demonstrate that global/local bias can be separated from systemising. Study 2 validated the Technical/Spatial and Language & Word Forms factors by showing that they affect performance on memory tasks. In Study 3, we validated Imagery Ability, Technical/Spatial, Language & Word Forms, Global Bias, and Systemising Tendency by issuing the SCSQ to a sample of synaesthetes (N = 121) who report atypical cognitive profiles on these subscales. Thus, the SCSQ consolidates research from traditionally disparate areas of cognitive science into a comprehensive cognitive style measure, which can be used in the general population, and special populations.
Infants' Visual Localization of Visual and Auditory Targets.
ERIC Educational Resources Information Center
Bechtold, A. Gordon; And Others
This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials; 25 of these visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…
Schendan, Haline E.; Ganis, Giorgio
2015-01-01
People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701
A Novel System for Visualizing Alphavirus Assembly
Steel, J. Jordan; Geiss, Brian J.
2015-01-01
Alphaviruses are small, enveloped RNA viruses that form infectious particles by budding through the cellular plasma membrane. To help visualize and understand the intracellular assembly of alphavirus virions we have developed a bimolecular fluorescence complementation-based system (BiFC) that allows visualization of capsid and E2 subcellular localization and association in live cells. In this system, N- or C-terminal Venus fluorescent protein fragments (VN- and VC-) are fused to the N-terminus of the capsid protein on the Sindbis virus structural polyprotein, which results in the formation of fluorescent capsid-like structures in the absence of viral genomes that associate with the plasma membrane of cells. Mutation of the capsid autoprotease active site blocks structural polyprotein processing and alters the subcellular distribution of capsid fluorescence. Incorporating mCherry into the extracellular domain of the E2 glycoprotein allows the visualization of E2 glycoprotein localization and showed a close association of the E2 and capsid proteins at the plasma membrane as expected. These results suggest that this system is a useful new tool to study alphavirus assembly in live cells and may be useful in identifying molecules that inhibit alphavirus virion formation. PMID:26122073
NASA Astrophysics Data System (ADS)
Kümmel, Stephan
Being able to visualize the dynamics of electrons in organic materials is a fascinating perspective. Simulations based on time-dependent density functional theory make it possible to realize this hope, as they visualize the flow of charge through molecular structures in real space and real time. Here we present results on two fundamental processes: photoemission from organic semiconductor molecules and charge transport through molecular structures. In the first part we demonstrate that angle-resolved photoemission intensities - from both theory and experiment - can often be interpreted as a visualization of molecular orbitals. However, counter-intuitive quantum-mechanical electron dynamics, such as emission perpendicular to the direction of the electrical field, can substantially alter the picture, adding surprising features to the molecular orbital interpretation. In a second study we calculate the flow of charge through conjugated molecules. The calculations show in real time how breaks in the conjugation can lead to a local buildup of charge and the formation of local electrical dipoles. These can interact with neighboring molecular chains. As a consequence, collections of ''molecular electrical wires'' can show distinctly different characteristics than ''classical electrical wires''. German Science Foundation GRK 1640.
Feature-selective attention enhances color signals in early visual areas of the human brain.
Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A
2006-09-19
We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
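Frequency tagging works because each flickering dot population drives an SSVEP at its own frequency, which can be read off the amplitude spectrum. The sketch below extracts tagged amplitudes from a synthetic occipital signal; the sampling rate, frequencies, and amplitudes are invented for illustration and do not correspond to the study's recordings.

```python
import numpy as np

def ssvep_amplitudes(eeg, fs, tag_freqs):
    """Amplitude of frequency-tagged SSVEP responses via an FFT.

    eeg       : (n_samples,) signal from an occipital channel
    fs        : sampling rate in Hz
    tag_freqs : flicker frequencies of the stimulus populations (Hz)
    """
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

# Toy signal: 10 s at 500 Hz containing 7.5 Hz and 12 Hz tagged responses,
# with the 12 Hz ("attended") component boosted, plus noise.
fs = 500
t = np.arange(0, 10, 1 / fs)
eeg = (0.5 * np.sin(2 * np.pi * 7.5 * t)
       + 1.5 * np.sin(2 * np.pi * 12.0 * t)
       + np.random.randn(t.size))
print(ssvep_amplitudes(eeg, fs, tag_freqs=[7.5, 12.0]))
```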
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Sensory processing patterns predict the integration of information held in visual working memory.
Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne
2016-02-01
Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Local field potentials and border ownership: A conjecture about computation in visual cortex.
Zucker, Steven W
2012-01-01
Border ownership is an intermediate-level visual task: it must integrate (upward flowing) image information about edges with (downward flowing) shape information. This highlights the familiar local-to-global aspect of border formation (linking of edge elements to form contours) with the much less studied global-to-local aspect (which edge elements form part of the same shape). To address this task we show how to incorporate certain high-level notions of distance and geometric arrangement into a form that can influence image-based edge information. The center of the argument is a reaction-diffusion equation that reveals how (global) aspects of the distance map (that is, shape) can be "read out" locally, suggesting a solution to the border ownership problem. Since the reaction-diffusion equation defines a field, a possible information processing role for the local field potential can be defined. We argue that such fields also underlie the Gestalt notion of closure, especially when it is refined using modern experimental techniques. An important implication of this theoretical argument is that, if true, then network modeling must be extended to include the substrate surrounding spiking neurons, including glia. Copyright © 2012 Elsevier Ltd. All rights reserved.
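One way to make the "global distance read out locally" idea concrete is a diffusion-decay field sourced at the shape's border: at steady state, the local field value falls off with distance from the contour, so a purely local quantity carries shape information. The sketch below is only a toy of that intuition, not the reaction-diffusion system analyzed in the paper; grid size, coefficients, and boundary handling are arbitrary choices.

```python
import numpy as np

def diffusion_distance_readout(border, n_steps=2000, D=1.0, decay=0.05, dt=0.2):
    """Steady state of du/dt = D*lap(u) - decay*u with the contour clamped to 1.

    -log(u) then grows with distance from the border, so a local readout of
    the field reflects global distance-map (shape) information.

    border : (H, W) boolean array marking the contour of a shape
    """
    u = np.zeros(border.shape)
    for _ in range(n_steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * (D * lap - decay * u)
        u[border] = 1.0                      # contour acts as a constant source
    return -np.log(u + 1e-12)

# Toy shape: a square contour; values inside grow with distance from the edge.
grid = np.zeros((64, 64), dtype=bool)
grid[16, 16:48] = grid[47, 16:48] = grid[16:48, 16] = grid[16:48, 47] = True
dist_like = diffusion_distance_readout(grid)
print(round(float(dist_like[32, 32]), 2), round(float(dist_like[17, 32]), 2))
```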
Psycho acoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on various auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, took part, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity with normal middle ear functioning. Individuals with visual impairment showed superior thresholds in MDT, SRDT, and SNR50 compared to normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.
Mundy, Matthew E
2014-01-01
Explanations for the cognitive basis of the Müller-Lyer illusion are still frustratingly mixed. To date, Day's (1989) theory of perceptual compromise has received little empirical attention. In this study, we examine the merit of Day's hypothesis for the Müller-Lyer illusion by biasing participants toward global or local visual processing through exposure to Navon (1977) stimuli, which are known to alter processing-level preference for a short time. Participants (N = 306) were randomly allocated to global, local, or control conditions. Those in the global or local conditions were exposed to Navon stimuli for 5 min and were required to report on the global or local stimulus features, respectively. Subsequently, participants completed a computerized Müller-Lyer experiment in which they adjusted the length of a line to match an illusory figure. The illusion was significantly stronger for participants with a global bias, and significantly weaker for those with a local bias, compared with the control condition. These findings provide empirical support for Day's "conflicting cues" theory of perceptual compromise in the Müller-Lyer illusion.
Drake, Jennifer E.; Winner, Ellen
2009-01-01
A local processing bias in the block design task and in drawing strategy has been used to account for realistic drawing skill in individuals with autism. We investigated whether the same kind of local processing bias is seen in typically developing children with unusual skill in realistic graphic representation. Forty-three 5–11-year-olds who drew a still life completed a version of the block design task in both standard and segmented form, were tested for their memory for the block design items, and were given the Kaufmann Brief Intelligence Test-II. Children were classified as gifted, moderately gifted or typical on the basis of the level of realism in their drawings. Similar to autistic individuals, the gifted group showed a local processing bias in the block design task. But unlike autistic individuals, the gifted group showed a global advantage in the visual memory task and did not use a local drawing strategy; in addition, their graphic realism skill was related to verbal IQ. Differences in the extent of local processing bias in autistic and typically developing children with drawing talent are discussed. PMID:19528030
The taste-visual cross-modal Stroop effect: An event-related brain potential study.
Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L
2014-03-28
Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators localized in the prefrontal cortex and the parahippocampal gyrus contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with the process of conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the process of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Experience-Driven Formation of Parts-Based Representations in a Model of Layered Visual Memory
Jitsev, Jenia; von der Malsburg, Christoph
2009-01-01
Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running at the top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during the recall and the synaptic patterns comprising the memory traces. PMID:19862345
Visual and Experiential Learning Opportunities through Geospatial Data
NASA Astrophysics Data System (ADS)
Gardiner, N.; Bulletins, S.
2007-12-01
Global observation data from satellites are essential for both research and education about Earth's climate because they help convey the temporal and spatial scales inherent to the subject, which are beyond most people's experience. Experts in the development of visualizations using spatial data distinguish the process of learning through data exploration from the process of learning by absorbing a story told from beginning to end. The former requires the viewer to absorb complex spatial and temporal dynamics inherent to visualized data and therefore is a process best undertaken by those familiar with the data and processes represented. The latter requires that the viewer understand the intended presentation of concepts, so story telling can be employed to educate viewers with varying backgrounds and familiarity with a given subject. Three examples of climate science education, drawn from the current science program Science Bulletins (American Museum of Natural History, New York, USA), demonstrate the power of visualized global earth observations for climate science education. The first example seeks to explain the potential for sea level rise on a global basis. A short feature film includes the visualized, projected effects of sea level rise at local to global scales; this visualization complements laboratory and field observations of glacier retreat and paleoclimatic reconstructions based on fossilized coral reef analysis, each of which is also depicted in the film. The narrative structure keeps learners focused on discrete scientific concepts. The second example utilizes half-hourly cloud observations to demonstrate weather and climate patterns to audiences on a global basis. Here, the scientific messages are qualitatively simpler, but the viewer must deduce his own complex visual understanding of the visualized data. Finally, we present plans for distributing climate science education products via mediated public events whereby participants learn from climate and geovisualization experts working collaboratively. This last example provides an opportunity for deep exploration of patterns and processes in a live setting and makes full use of complementary talents, including computer science, internet-enabled data sharing, remote sensing image processing, and meteorology. These innovative examples from informal educators serve as powerful pedagogical models to consider for the classroom of the future.
Bayır, Şafak
2016-01-01
With the advances in the computer field, methods and techniques in automatic image processing and analysis provide the opportunity to detect automatically the change and degeneration in retinal images. Localization of the optic disc is extremely important for determining hard exudate lesions or neovascularization, which occur in the later phases of diabetic retinopathy, in computer-aided eye disease diagnosis systems. Whereas optic disc detection is a fairly easy process in normal retinal images, detecting this region may be difficult in retinal images affected by diabetic retinopathy. From a machine learning perspective, information related to the optic disc and to hard exudates can sometimes look the same. We present a novel approach for efficient and accurate localization of the optic disc in retinal images containing noise and other lesions. This approach comprises five main steps: image processing, keypoint extraction, texture analysis, a visual dictionary, and classifier techniques. We tested our proposed technique on 3 public datasets and obtained quantitative results. Experimental results show that an average optic disc detection accuracy of 94.38%, 95.00%, and 90.00% is achieved, respectively, on the following public datasets: DIARETDB1, DRIVE, and ROC. PMID:27110272
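The visual-dictionary step is essentially a bag-of-visual-words encoding: cluster local descriptors, histogram each image over the clusters, and train a classifier on the histograms. The sketch below uses crude patch descriptors, synthetic images, and scikit-learn's KMeans/SVC as stand-ins for the paper's keypoint and texture features, so every dataset and parameter shown is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def patch_descriptors(image, patch=8, stride=8):
    """Crude stand-in for keypoint/texture descriptors: flattened image patches."""
    H, W = image.shape
    return np.array([image[y:y + patch, x:x + patch].ravel()
                     for y in range(0, H - patch + 1, stride)
                     for x in range(0, W - patch + 1, stride)])

def bovw_histogram(descriptors, dictionary):
    """Encode one image as a normalized histogram over the visual dictionary."""
    words = dictionary.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical training data: 20 synthetic 64x64 candidate regions labelled
# 1 (contains optic disc) or 0 (does not); real descriptors would come from
# the keypoint-extraction and texture-analysis steps.
rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 20)
images = [rng.random((64, 64)) + lab * np.linspace(0.0, 1.0, 64) for lab in labels]

all_desc = np.vstack([patch_descriptors(im) for im in images])
dictionary = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc)
X = np.array([bovw_histogram(patch_descriptors(im), dictionary) for im in images])
classifier = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", classifier.score(X, labels))
```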
Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition
NASA Astrophysics Data System (ADS)
Yao, Min; Zhu, Changming
2017-07-01
Compensating for the illumination of a face image is an important step toward effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method which specifically considers removing illumination boundaries as well as reducing regional illumination. We begin with an analysis of the commonly used reflectance model and then describe the hybrid use of adaptive non-local smoothing and local information coding based on Weber's law. The effectiveness and advantages of this combination are demonstrated visually and experimentally. Results on the Extended Yale B database show better performance than several other well-known methods.
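Weber-law local coding replaces each pixel with an arctangent of its summed relative differences to neighbouring pixels, which suppresses smooth illumination. The sketch below is a generic "Weberface" computation with a plain Gaussian blur standing in for the adaptive non-local smoothing described in the paper; the parameters and synthetic image are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weberface(image, alpha=2.0, sigma=1.0, eps=1e-6):
    """Weber-law local coding of a face image (a generic 'Weberface' sketch).

    Each pixel is replaced by arctan(alpha * sum of relative differences to
    its 4 neighbours), which is largely insensitive to smooth illumination.
    A plain Gaussian blur stands in for the adaptive non-local smoothing step.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    diff = np.zeros_like(smoothed)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        diff += (smoothed - np.roll(smoothed, shift, axis)) / (smoothed + eps)
    return np.arctan(alpha * diff)

# Toy usage: a synthetic face-like pattern under a strong illumination gradient.
rng = np.random.default_rng(4)
face = rng.random((96, 96)) * np.linspace(0.1, 1.0, 96)[None, :]
normalized = weberface(face)
print(normalized.shape, float(normalized.min()), float(normalized.max()))
```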
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Yi; Cai, Zhonghou; Chen, Pice
Dynamical phase separation during a solid-solid phase transition poses a challenge for understanding the fundamental processes in correlated materials. Critical information underlying a phase transition, such as localized phase competition, is difficult to reveal by measurements that are spatially averaged over many phase-separated regions. The ability to simultaneously track the spatial and temporal evolution of such systems is essential to understanding mesoscopic processes during a phase transition. Using state-of-the-art time-resolved hard x-ray diffraction microscopy, we directly visualize the structural phase progression in a VO2 film upon photoexcitation. Following a homogeneous in-plane optical excitation, the phase transformation is initiated at discrete sites and completed by the growth of one lattice structure into the other, instead of a simultaneous isotropic lattice symmetry change. The time-dependent x-ray diffraction spatial maps show that the in-plane phase progression in laser-superheated VO2 proceeds via a displacive lattice transformation as a result of relaxation from an excited monoclinic phase into a rutile phase. The speed of the phase-front progression is quantitatively measured and is faster than a process driven by in-plane thermal diffusion but slower than the sound speed in VO2. Lastly, the direct visualization of localized structural changes in the time domain opens a new avenue to study mesoscopic processes in driven systems.
NASA Astrophysics Data System (ADS)
Levy, S.
2013-12-01
Public places such as parks, urban plazas, transportation centers and educational institutions offer the opportunity to reach many people in the course of daily life. Yet these public spaces are often devoid of any substantive information about the local environment and the natural processes that have shaped it. Art is a particularly effective means of visualizing environmental phenomena. Art has the ability to translate the processes of nature into visual information that communicates with clarity and beauty. People often have no connection to the world through which they walk: no sense of their place in the local watershed or where the rainwater goes once it hits the ground. Creating an awareness of place is a critical first step for people to understand the changes in their world. Art can be a gateway for understanding geoscientific concepts that are not frequently made accessible in a visual manner, and art requires scientific knowledge to inform an accurate visualization of nature. Artists must collaborate with scientists in order to create art that informs the public about environmental processes. There is a new current in the design world that combines art and technology to create artful solutions to site issues such as stormwater runoff, periodic flooding and habitat destruction. Instead of being considered functionless, art is now given a chance to do some real work on the site. This new combination of functional and aesthetic concerns will have a major impact on how site issues are perceived. Site concerns that were once considered obstacles can become opportunities to visualize and celebrate how problems can be solved. This sort of artful solution requires teamwork across many disciplines. In my presentation I will speak about various ways I have visualized the invisible processes of the natural world in my projects. I will share eight of my permanent and temporary art commissions that are collaborations with scientists and engineers. These works reveal wetland habitats, tides, prevailing winds, rain and microorganisms, and water pollution. In examining each project I will detail the essential collaborations with scientists and engineers that brought the projects to fruition. I will discuss how the cross-discipline approach of scientists, engineers and designers made effective and artful solutions to site issues, and created visually stimulating and educational places. I will also look at the role of truth and metaphor in art and compare how accuracy and data collection have differing thresholds in art and in science.
Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya
2012-03-01
The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere is dominant in processing gaze and whether the two hemispheres cooperate with each other in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented either in unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RTs) were shorter when gaze cues were directed toward rather than away from targets, whichever visual field they were presented in. RTs were shorter for left than for right visual field presentations. RTs in mono-directional bilateral presentations were shorter than those in either left or right unilateral presentations. When bi-directional bilateral cues were presented, RTs were faster when the valid cues appeared in the left rather than the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation, in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.
Hübner, Ronald; Volberg, Gregor
2005-06-01
This article presents and tests the authors' integration hypothesis of global/local processing, which proposes that at early stages of processing, the identities of global and local units of a hierarchical stimulus are represented separately from information about their respective levels and that, therefore, identity and level information have to be integrated at later stages. It further states that the cerebral hemispheres differ in their capacities for these binding processes. Three experiments are reported in which the integration hypothesis was tested. Participants had to identify a letter at a prespecified level with the viewing duration restricted by a mask. False reporting of the letter at the nontarget level was predicted to occur more often when the integration of identity and level could fail. This was the case. Moreover, visual-field effects occurred, as expected. Finally, a multinomial model was constructed and fitted to the data. ((c) 2005 APA, all rights reserved).
Matching cue size and task properties in exogenous attention.
Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet
2013-01-01
Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.
Audio-visual integration through the parallel visual pathways.
Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond
2015-10-22
Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysical data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short-term fluctuation--were found. CONCLUSION--LSV is not dependent on reference data from normals or on repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
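The abstract does not spell out how LSV is computed, but the general idea of a pointwise local-variability map derived from a sensitivity grid can be sketched as follows. This is a minimal illustration only, assuming the field is a 2-D array of decibel sensitivities with NaN at untested locations and using a neighbourhood standard deviation as a stand-in for the paper's measure; the function name, window size and toy grid are assumptions.

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_spatial_variability(field, size=3):
    """Illustrative local-variability map: standard deviation of each point's
    3x3 neighbourhood in a visual-field sensitivity grid (dB).
    NaNs (untested locations) are ignored within each window."""
    def window_sd(values):
        vals = values[~np.isnan(values)]
        return np.nan if vals.size < 2 else vals.std(ddof=1)
    return generic_filter(field, window_sd, size=size,
                          mode="constant", cval=np.nan)

# Toy grid of dB sensitivities; NaN marks locations outside the test pattern.
field = np.full((8, 9), np.nan)
field[2:6, 1:8] = 30 + np.random.default_rng(0).normal(0, 2, (4, 7))
lsv_map = local_spatial_variability(field)
print(np.nanmean(lsv_map))  # one summary value per field, analogous to a global index
```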
A Study on Breaking Through Applet Security Restrictions in an Internet-Based Visualization System
NASA Astrophysics Data System (ADS)
Chen, Jie; Huang, Yan
In realizing an Internet-based visualization system for protein molecules, the system needs to allow users to observe molecular structures stored on the local computer; that is, clients must be able to generate the three-dimensional graphics from a PDB file on the client machine. This requires the Applet to access local files, which raises the question of Applet security restrictions. This paper covers two implementation methods: 1. Use the signature tools, key management tools and Policy Editor provided by the JDK to digitally sign and authenticate the Java Applet, thereby relaxing certain security restrictions in the browser. 2. Use a Servlet as an agent to access data indirectly, breaking through the sandbox-model restrictions that the traditional Java Virtual Machine places on Applet capabilities. Both approaches can break through the Applet's security restrictions, and each has its own strengths.
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
Mihalas, Stefan; Dong, Yi; von der Heydt, Rüdiger; Niebur, Ernst
2011-01-01
Visual attention is often understood as a modulatory field acting at early stages of processing, but the mechanisms that direct and fit the field to the attended object are not known. We show that a purely spatial attention field propagating downward in the neuronal network responsible for perceptual organization will be reshaped, repositioned, and sharpened to match the object's shape and scale. Key features of the model are grouping neurons integrating local features into coherent tentative objects, excitatory feedback to the same local feature neurons that caused grouping neuron activation, and inhibition between incompatible interpretations both at the local feature level and at the object representation level. PMID:21502489
Katagiri, Masatoshi; Kasai, Tetsuko; Kamio, Yoko; Murohashi, Harumitsu
2013-02-01
The purpose of the present study was to determine whether individuals with Asperger's disorder exhibit difficulty in switching attention from a local level to a global level. Eleven participants with Asperger's disorder and 11 age- and gender-matched healthy controls performed a level-repetition switching task using Navon-type hierarchical stimuli. In both groups, level-repetition was beneficial at both levels. Furthermore, individuals with Asperger's disorder exhibited difficulty in switching attention from a local level to a global level compared to control individuals. These findings suggested that there is a problem with the inhibitory mechanism that influences the output of enhanced local visual processing in Asperger's disorder.
Milner, A D; Paulignan, Y; Dijkerman, H C; Michel, F; Jeannerod, M
1999-11-07
We tested a patient (A. T.) with bilateral brain damage to the parietal lobes, whose resulting 'optic ataxia' causes her to make large pointing errors when asked to locate single light emitting diodes presented in her visual field. We report here that, unlike normal individuals, A. T.'s pointing accuracy improved when she was required to wait for 5 s before responding. This counter-intuitive result is interpreted as reflecting the very brief time-scale on which visuomotor control systems in the superior parietal lobe operate. When an immediate response was required, A. T.'s damaged visuomotor system caused her to make large errors; but when a delay was required, a different, more flexible, visuospatial coding system--presumably relatively intact in her brain--came into play, resulting in much more accurate responses. The data are consistent with a dual processing theory whereby motor responses made directly to visual stimuli are guided by a dedicated system in the superior parietal and premotor cortices, while responses to remembered stimuli depend on perceptual processing and may thus crucially involve processing within the temporal neocortex.
Seismpol: a Visual-Basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual-Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals using data recorded by three-component seismic stations. The analysis procedure can be used either for interactive earthquake analysis or for automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition method (CMD), so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation derived from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. In fact, the program uses a band-pass Butterworth filter to process the signals in the frequency domain by decomposing a selected signal window into a series of narrow frequency bands. Significant results supported by well defined polarizations and source azimuth estimates for P and S phases are also obtained for short-period seismic events (local microearthquakes).
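For orientation, the covariance-matrix polarization analysis that such a program applies window by window can be sketched in a few lines. This is a generic illustration, not the original Visual-Basic code: the window length, attribute definitions (rectilinearity, azimuth) and component ordering are common conventions assumed here rather than taken from the paper.

```python
import numpy as np

def polarization_attributes(z, n, e):
    """Covariance-matrix decomposition of one three-component window:
    eigen-decompose the 3x3 covariance matrix and derive rectilinearity
    and the horizontal azimuth of the principal polarization direction."""
    cov = np.cov(np.vstack([z, n, e]))          # rows ordered Z, N, E
    eigval, eigvec = np.linalg.eigh(cov)        # eigenvalues in ascending order
    l1, l2, l3 = eigval[::-1]                   # largest first
    v1 = eigvec[:, -1]                          # principal eigenvector
    rectilinearity = 1.0 - (l2 + l3) / (2.0 * l1)
    azimuth = np.degrees(np.arctan2(v1[2], v1[1])) % 360.0   # measured from N toward E
    return rectilinearity, azimuth

# Sliding-window use over a toy record (2 s windows, 50% overlap at 100 Hz).
rng = np.random.default_rng(1)
z, n, e = rng.normal(size=(3, 6000))
win = 200
for start in range(0, z.size - win, win // 2):
    rl, az = polarization_attributes(z[start:start + win],
                                     n[start:start + win],
                                     e[start:start + win])
```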
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-commerce websites. In those applications where the response-time requirement is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually and the contextual relations among words are ignored. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling these contextual relations. Binary quadratic programming is used to enforce the contextual consistency of the words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1,000 times, and that under the scenario of merchandize image NDR, the expensive local interest point features used in conventional approaches can be replaced by color-moment features, which reduces the time cost by 9,202 times while maintaining comparable performance to the state-of-the-art methods.
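The collocation idea can be illustrated with a toy sketch: each detected local feature keeps a few candidate visual words, and the assignment maximizing the summed pairwise collocation score is selected. The sketch below uses brute-force enumeration rather than the paper's binary quadratic programming formulation, and the collocation matrix, candidate sets and function name are invented for illustration.

```python
import itertools
import numpy as np

def correct_visual_typos(candidates, collocation):
    """Pick one visual word per feature so that the summed pairwise
    collocation score of the chosen words is maximal (brute force here;
    the paper formulates this selection as binary quadratic programming)."""
    best_words, best_score = None, -np.inf
    for assignment in itertools.product(*candidates):
        score = sum(collocation[a][b]
                    for a, b in itertools.combinations(assignment, 2))
        if score > best_score:
            best_words, best_score = list(assignment), score
    return best_words

# Toy example: 3 features, each with 2 candidate words (ids 0..3);
# the collocation matrix encodes how often words co-occur in training images.
collocation = np.array([[0, 5, 1, 0],
                        [5, 0, 4, 0],
                        [1, 4, 0, 1],
                        [0, 0, 1, 0]])
candidates = [(0, 3), (1, 3), (2, 3)]
print(correct_visual_typos(candidates, collocation))  # -> [0, 1, 2]
```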
Differential contribution of early visual areas to the perceptual process of contour processing.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
2004-04-01
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
Shih, Wenting; Yamada, Soichiro
2011-12-22
Traditionally, cell migration has been studied on two-dimensional, stiff plastic surfaces. However, during important biological processes such as wound healing, tissue regeneration, and cancer metastasis, cells must navigate through complex, three-dimensional extracellular tissue. To better understand the mechanisms behind these biological processes, it is important to examine the roles of the proteins responsible for driving cell migration. Here, we outline a protocol to study the mechanisms of cell migration using the epithelial cell line (MDCK), and a three-dimensional, fibrous, self-polymerizing matrix as a model system. This optically clear extracellular matrix is easily amenable to live-cell imaging studies and better mimics the physiological, soft tissue environment. This report demonstrates a technique for directly visualizing protein localization and dynamics, and deformation of the surrounding three-dimensional matrix. Examination of protein localization and dynamics during cellular processes provides key insight into protein functions. Genetically encoded fluorescent tags provide a unique method for observing protein localization and dynamics. Using this technique, we can analyze the subcellular accumulation of key, force-generating cytoskeletal components in real-time as the cell maneuvers through the matrix. In addition, using multiple fluorescent tags with different wavelengths, we can examine the localization of multiple proteins simultaneously, thus allowing us to test, for example, whether different proteins have similar or divergent roles. Furthermore, the dynamics of fluorescently tagged proteins can be quantified using Fluorescent Recovery After Photobleaching (FRAP) analysis. This measurement assays the protein mobility and how stably bound the proteins are to the cytoskeletal network. By combining live-cell imaging with the treatment of protein function inhibitors, we can examine in real-time the changes in the distribution of proteins and morphology of migrating cells. Furthermore, we also combine live-cell imaging with the use of fluorescent tracer particles embedded within the matrix to visualize the matrix deformation during cell migration. Thus, we can visualize how a migrating cell distributes force-generating proteins, and where the traction forces are exerted to the surrounding matrix. Through these techniques, we can gain valuable insight into the roles of specific proteins and their contributions to the mechanisms of cell migration.
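The FRAP quantification mentioned at the end can be sketched as a single-exponential recovery fit, from which a mobile fraction and a recovery half-time are read off. This is a minimal illustration assuming the intensity trace is already background-corrected and normalized to the pre-bleach level; the model, parameter names and toy data are not taken from the protocol itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_fraction, tau):
    """Single-exponential FRAP recovery, normalised so the pre-bleach
    intensity is 1 and the first post-bleach point is 0."""
    return mobile_fraction * (1.0 - np.exp(-t / tau))

# t in seconds, intensity normalised to the pre-bleach level (toy data).
t = np.arange(0, 60, 2.0)
rng = np.random.default_rng(0)
intensity = frap_recovery(t, 0.7, 8.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(frap_recovery, t, intensity, p0=(0.5, 5.0))
mobile_fraction, tau = popt
half_time = tau * np.log(2.0)
print(f"mobile fraction ~ {mobile_fraction:.2f}, half-time ~ {half_time:.1f} s")
```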
Sedgemoor: A Suitable Case for Treatment? Heritage, Interpretation and Educational Process.
ERIC Educational Resources Information Center
Thompson, Lynne
A partnership between the Universities of Exeter and Bournemouth at their joint University Centre in Yeovil College, Somerset (England) allowed local students to participate in higher education via a BA degree in Heritage and Regional Studies. This program represents several disciplines including history, literature, and the visual arts. It aims…
Liu, Li-Ping; Deng, Zi-Niu; Qu, Jin-Wang; Yan, Jia-Wen; Catara, Vittoria; Li, Da-Zhi; Long, Gui-You; Li, Na
2012-09-01
Xanthomonas axonopodis pv. citri (Xac) is the causal agent of citrus bacterial canker, an economically important disease for the world citrus industry. To monitor the infection process of Xac in different citrus plants, an enhanced green fluorescent protein (EGFP) visualization system was constructed to follow the propagation and localization of the pathogen in planta. First, wild-type Xac was isolated from diseased leaves of the susceptible 'Bingtang' sweet orange, and the isolated Xac was then labeled with EGFP by triparental mating. After PCR identification, the growth kinetics and pathogenicity of the transformants were analyzed in comparison with the wild-type Xac. The EGFP-labeled bacteria were inoculated by spraying on the surface and by infiltration into the mesophyll of 'Bingtang' sweet orange leaves. Bacterial cell multiplication and diffusion were observed directly under a confocal laser scanning microscope at different intervals after inoculation. The results indicated that the EGFP-labeled Xac, which emitted clear green fluorescence under the fluorescence microscope, revealed the infection process and had the same pathogenicity toward citrus as the wild type. Consequently, the labeled Xac demonstrated its utility as an efficient tool to monitor the pathogen infection.
[Several mechanisms of visual gnosis disorders in local brain lesions].
Meerson, Ia A
1981-01-01
The studies examined the peculiarities of recognizing visual images by patients with local cerebral lesions under conditions of incomplete sets of image features, disjunction of the features, distortion of their spatial arrangement, and unusual spatial orientation of the image as a whole. It was found that elimination of even one essential feature sharply hampered recognition of the image both by healthy individuals (controls) and by patients with extraoccipital lesions, whereas elimination of several nonessential features only slowed down the process. In contrast, the difficulties that patients with occipital lesions had in recognizing incomplete images were directly proportional to the number of eliminated features irrespective of their significance; that is, these patients were unable to evaluate the hierarchy of the features. The recognition process in these patients followed the path of scanning individual features, their reaccumulation and summation. Recognition of fragmented, spatially distorted and unusually oriented images was found to be selectively affected in patients with parietal lobe lesions. The patients with occipital lesions recognized such images practically as well as ordinary ones.
Martens, Ulla; Hübner, Ronald
2013-03-01
While hemispheric differences in global/local processing have been reported by various studies, it is still under dispute at which processing stage they occur. Primarily, it was assumed that these asymmetries originate from an early perceptual stage. Instead, the content-level binding theory (Hübner & Volberg, 2005) suggests that the hemispheres differ at a later stage at which the stimulus information is bound to its respective level. The present study tested this assumption by means of steady-state evoked potentials (SSVEPs). In particular, we presented hierarchical letters flickering at 12 Hz while participants categorised the letters at a pre-cued level (global or local). The information at the two levels could be congruent or incongruent with respect to the required response. Since content-binding is only necessary if there is a response conflict, asymmetric hemispheric processing should be observed only for incongruent stimuli. Indeed, our results show that the cue and congruent stimuli elicited equal SSVEP global/local effects in both hemispheres. In contrast, incongruent stimuli elicited lower SSVEP amplitudes for a local than for a global target level at left posterior electrodes, whereas a reversed pattern was seen at right hemispheric electrodes. These findings provide further evidence for a level-specific hemispheric advantage with respect to content-level binding. Moreover, the fact that the SSVEP is sensitive to these processes offers the possibility to separately track global and local processing by presenting both level contents with different frequencies. Copyright © 2012 Elsevier Inc. All rights reserved.
Van Eylen, Lien; Boets, Bart; Cosemans, Nele; Peeters, Hilde; Steyaert, Jean; Wagemans, Johan; Noens, Ilse
2017-03-01
Heterogeneity within autism spectrum disorder (ASD) hampers insight in the etiology and stimulates the search for endophenotypes. Endophenotypes should meet several criteria, the most important being the association with ASD and the higher occurrence rate in unaffected ASD relatives than in the general population. We evaluated these criteria for executive functioning (EF) and local-global (L-G) visual processing. By administering an extensive cognitive battery which increases the validity of the measures, we examined which of the cognitive anomalies shown by ASD probands also occur in their unaffected relatives (n = 113) compared to typically developing (TD) controls (n = 100). Microarrays were performed, so we could exclude relatives from probands with a de novo mutation in a known ASD susceptibility copy number variant, thus increasing the probability that genetic risk variants are shared by the ASD relatives. An overview of studies investigating EF and L-G processing in ASD relatives was also provided. For EF, ASD relatives - like ASD probands - showed impairments in response inhibition, cognitive flexibility and generativity (specifically, ideational fluency), and EF impairments in daily life. For L-G visual processing, the ASD relatives showed no anomalies on the tasks, but they reported more attention to detail in daily life. Group differences were similar for siblings and for parents of ASD probands, and yielded larger effect sizes in a multiplex subsample. The group effect sizes for the comparison between ASD probands and TD individuals were generally larger than those of the ASD relatives compared to TD individuals. Impaired cognitive flexibility, ideational fluency and response inhibition are strong candidate endophenotypes for ASD. They could help to delineate etiologically more homogeneous subgroups, which is clinically important to allow assigning ASD probands to different, more targeted, interventions. © 2016 Association for Child and Adolescent Mental Health.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
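A heavily simplified version of the basic embedding step can be sketched as follows: each QR module nudges the luminance of a masked subset of pixels in the corresponding image block. The halftoning mask is replaced here by a random mask, and the probability-of-error model, HVS-weighted quality metric and nonlinear optimization of the paper are omitted; the parameter values and names are illustrative only.

```python
import numpy as np

def embed_qr_in_luminance(image_y, qr_bits, strength=18, mask_fraction=0.3):
    """Push each QR module's bit into the luminance channel of the
    corresponding image block, modifying only a masked subset of pixels
    (a stand-in for the halftoning masks used in the paper)."""
    h, w = image_y.shape
    n = qr_bits.shape[0]                       # QR modules per side
    bh, bw = h // n, w // n
    out = image_y.astype(float).copy()
    rng = np.random.default_rng(0)
    for i in range(n):
        for j in range(n):
            block = out[i*bh:(i+1)*bh, j*bw:(j+1)*bw]   # view into 'out'
            mask = rng.random(block.shape) < mask_fraction
            # dark module -> lower luminance, light module -> raise it
            sign = -1.0 if qr_bits[i, j] else 1.0
            block[mask] = np.clip(block[mask] + sign * strength, 0, 255)
    return out.astype(np.uint8)

# Toy use: a 21x21 QR matrix (version 1) embedded into a 210x210 grey image.
qr = np.random.default_rng(1).integers(0, 2, (21, 21))
host = np.full((210, 210), 128, dtype=np.uint8)
stego = embed_qr_in_luminance(host, qr)
```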
Neural time course of visually enhanced echo suppression.
Bishop, Christopher W; London, Sam; Miller, Lee M
2012-10-01
Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.
Subjective visual perception: from local processing to emergent phenomena of brain activity.
Panagiotaropoulos, Theofanis I; Kapoor, Vishal; Logothetis, Nikos K
2014-05-05
The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness.
Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.
2013-01-01
Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157
Topological visual mapping in robotics.
Romero, Anna; Cazorla, Miguel
2012-08-01
A key problem in robotics is the construction of a map of the robot's environment. This map can be used in different tasks, like localization, recognition, obstacle avoidance, etc. Moreover, the simultaneous localization and mapping (SLAM) problem has attracted a lot of interest in the robotics community. This paper presents a new method for visual mapping, using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features in a graph, so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it also defines a matching criterion for loop-closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and to remove the outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. During the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found when building the graph.
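The overall pipeline, extracting local features per image, matching images, and linking well-matched images as nodes of a topological graph, can be sketched as below. This is a generic illustration using OpenCV ORB features, a ratio-test match count in place of the paper's GTM outlier removal, and networkx for the graph; the threshold, image paths and function names are assumptions.

```python
import cv2
import networkx as nx

def match_score(desc_a, desc_b, ratio=0.75):
    """Number of ORB matches surviving the ratio test (a simple stand-in
    for the graph transformation matching step described in the paper)."""
    if desc_a is None or desc_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def build_topological_map(image_paths, link_threshold=40):
    """Nodes are key images (places); edges link images that share
    enough matched local features."""
    orb = cv2.ORB_create(nfeatures=500)
    graph = nx.Graph()
    descriptors = {}
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        descriptors[path] = desc
        graph.add_node(path)
    paths = list(image_paths)
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            if match_score(descriptors[paths[i]], descriptors[paths[j]]) >= link_threshold:
                graph.add_edge(paths[i], paths[j])
    return graph
```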
Accessing and Visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, Bruce G.; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids. These tools do one or more of the following tasks: visualize local data sets for local users, visualize local data sets for remote users, and access and visualize remote data sets. The tools are used for various types of data, including remotely sensed image data, digital elevation models, astronomical surveys, etc. The paper attempts to pull some common elements out of these tools that may be useful for others who have to work with similarly large data sets.
Ensemble perception of color in autistic adults.
Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna
2017-05-01
Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Strength and coherence of binocular rivalry depends on shared stimulus complexity.
Alais, David; Melcher, David
2007-01-01
Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses etc). Two rivalry characteristics were measured: Depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.
Bayesian networks and information theory for audio-visual perception modeling.
Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis
2010-09-01
Thanks to their different senses, human observers acquire multiple sources of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
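The mutual-information screening step that guides model elicitation can be sketched on discrete (or discretized) experimental variables: pairs with high mutual information are candidates for direct dependencies in the Bayesian network. The sketch below uses scikit-learn's mutual_info_score; the toy variable names and data are invented, and the subsequent structure learning and parameter estimation are not shown.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_mutual_information(data):
    """Mutual information (in nats) between every pair of discrete variables;
    high-MI pairs are candidates for direct links in the elicited network."""
    names = list(data)
    mi = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            mi[(a, b)] = mutual_info_score(data[a], data[b])
    return mi

# Toy audio-visual localization data: discretized cue positions and response.
rng = np.random.default_rng(0)
audio = rng.integers(0, 3, 500)
visual = (audio + rng.integers(0, 2, 500)) % 3     # correlated with the audio cue
response = visual.copy()                           # response driven mostly by vision
data = {"audio_position": audio, "visual_position": visual, "response": response}
for pair, value in sorted(pairwise_mutual_information(data).items(),
                          key=lambda kv: -kv[1]):
    print(pair, round(value, 3))
```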
Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka
2015-01-01
Although normal aging is known to reduce cortical structures globally, the effects of aging on local structures and functions of the early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal sizes of retinotopically localized early visual areas, and whether those morphologic measures were associated with individual performance in visual perceptual learning. First, significant age-associated reduction was found in the areal sizes of V1, V2, and V3. Second, individual visual perceptual learning ability was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and that the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.
Flow visualization and modeling for education and outreach in low-income countries
NASA Astrophysics Data System (ADS)
Motanated, K.
2016-12-01
Being able to visualize the dynamic interaction between the movement of water and sediment flux is undeniably a powerful tool for students and novices trying to understand complicated earth surface processes. In a laser-sheet flow visualization technique, a thin, monochromatic light source is required to illuminate sediments or tracers in the flow. However, an ideal laser-sheet generator is rather expensive, especially for schools and universities in low-income countries. This project proposes less expensive options for a laser-sheet source and a flow visualization experiment configuration for qualitative observation and quantitative analysis of the interaction between the fluid medium and sediments. Here, a Fresnel lens is used to convert a point laser into a sheet laser. Multiple combinations of laser diodes of various wavelengths (nm) and powers (mW) and Fresnel lenses of various dimensions are analyzed. The pair that produces the thinnest and brightest light sheet is not only effective but also affordable. The motion of sediments in a flow can be observed by illuminating the flow region of interest with the laser sheet. The particle motion is recorded by a video camera that is capable of taking multiple frames per second and has a narrow depth of field. The recorded video file can be played back in slow motion so students can visually observe and qualitatively analyze the particle motion. An open-source software package for Particle Image Velocimetry (PIV) can calculate the local velocity of particles from still images extracted from the video and create a vector map depicting particle motion. This flow visualization experiment is inexpensive and the configuration is simple to set up. Most importantly, this flow visualization technique serves as a fundamental tool for earth surface process education and can further be applied to sedimentary process modeling.
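The core computation a PIV package performs on a pair of consecutive frames is illustrated below: each interrogation window's displacement is taken from the peak of a 2-D cross-correlation. This is a minimal sketch with fixed, non-overlapping windows and no sub-pixel refinement or outlier filtering; the window size and function names are arbitrary choices, not taken from any specific PIV package.

```python
import numpy as np
from scipy.signal import correlate2d

def piv_displacements(frame_a, frame_b, window=32):
    """Per-window displacement (dy, dx) between two frames, taken from the
    peak of the 2-D cross-correlation, as in basic PIV processing."""
    h, w = frame_a.shape
    vectors = {}
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            a = frame_a[y:y+window, x:x+window].astype(float)
            b = frame_b[y:y+window, x:x+window].astype(float)
            a -= a.mean()
            b -= b.mean()
            corr = correlate2d(b, a, mode="full")
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            dy = peak[0] - (window - 1)       # zero lag sits at index window-1
            dx = peak[1] - (window - 1)
            vectors[(y, x)] = (dy, dx)
    return vectors

# Toy check: frame_b is frame_a shifted 3 pixels to the right.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, 3, axis=1)
print(piv_displacements(frame_a, frame_b)[(0, 0)])   # -> (0, 3)
```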
Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren
2012-10-01
Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Despite the neural activity in response to ICs in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception and how do they interact to achieve the coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that the neural activity was stronger for stimuli with more contour completion than for stimuli with more contour representation in V1 and V2, which was the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, the activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.
Developmental changes in the neural influence of sublexical information on semantic processing.
Lee, Shu-Hui; Booth, James R; Chou, Tai-Li
2015-07-01
Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e. a global reading unit) and semantic radical (i.e. a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e. shared a semantic radical) versus visually-dissimilar (i.e. did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. All together, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.
VRML and Collaborative Environments: New Tools for Networked Visualization
NASA Astrophysics Data System (ADS)
Crutcher, R. M.; Plante, R. L.; Rajlich, P.
We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.
NASA Astrophysics Data System (ADS)
Song, Yongchen; Hao, Min; Zhao, Yuechao; Zhang, Liang
2014-12-01
In this study, the dual-chamber pressure decay method and magnetic resonance imaging (MRI) were used to dynamically visualize the gas diffusion process in liquid-saturated porous media, and the concentration-distance relationships for gas diffusing into liquid-saturated porous media at different times were obtained by quantitative analysis of the MR images. A non-iterative finite volume method was successfully applied to calculate the local gas diffusion coefficient in liquid-saturated porous media. The results agreed very well with the conventional pressure decay method, demonstrating that the method is feasible for determining the local diffusion coefficient of gas in liquid-saturated porous media at different times during the diffusion process.
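As background for the finite-volume treatment of diffusion, a forward model of 1-D diffusion into a liquid-saturated column can be written with a simple explicit finite-volume update and compared against measured concentration-distance profiles. This is not the paper's non-iterative coefficient-extraction scheme: the boundary conditions, nondimensional parameters and names below are assumptions for illustration.

```python
import numpy as np

def diffuse_fv(c0, D, dx, dt, steps):
    """Explicit finite-volume update of 1-D diffusion dC/dt = d/dx(D dC/dx),
    with a fixed concentration in the first cell (the gas-liquid interface)
    and a zero-flux condition at the far boundary. D may vary per cell."""
    c = np.asarray(c0, dtype=float).copy()
    D = np.broadcast_to(np.asarray(D, dtype=float), c.shape)
    assert dt <= dx**2 / (2 * D.max()), "explicit scheme would be unstable"
    for _ in range(steps):
        D_face = 2 * D[:-1] * D[1:] / (D[:-1] + D[1:])   # harmonic mean at the faces
        flux = -D_face * (c[1:] - c[:-1]) / dx            # flux across interior faces
        c[1:-1] += dt / dx * (flux[:-1] - flux[1:])       # inflow minus outflow
        c[-1] += dt / dx * flux[-1]                       # zero flux at the outer boundary
        c[0] = c0[0]                                      # interface held at saturation
    return c

# Nondimensional toy run: saturated interface at x = 0, initially gas-free liquid.
nx = 100
c0 = np.zeros(nx)
c0[0] = 1.0
profile = diffuse_fv(c0, D=1.0, dx=1.0, dt=0.2, steps=500)
# 'profile' plays the role of a concentration-distance curve at one time point,
# which could be compared with an MRI-derived profile to adjust D locally.
```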
The impact of recreational MDMA 'ecstasy' use on global form processing.
White, Claire; Edwards, Mark; Brown, John; Bell, Jason
2014-11-01
The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.
Avian visual behavior and the organization of the telencephalon.
Shimizu, Toru; Patton, Tadd B; Husband, Scott A
2010-01-01
Birds have excellent visual abilities that are comparable or superior to those of primates, but how the bird brain solves complex visual problems is poorly understood. More specifically, we lack knowledge about how such superb abilities are used in nature and how the brain, especially the telencephalon, is organized to process visual information. Here we review the results of several studies that examine the organization of the avian telencephalon and the relevance of visual abilities to avian social and reproductive behavior. Video playback and photographic stimuli show that birds can detect and evaluate subtle differences in local facial features of potential mates in a fashion similar to that of primates. These techniques have also revealed that birds do not attend well to global configural changes in the face, suggesting a fundamental difference between birds and primates in face perception. The telencephalon plays a major role in the visual and visuo-cognitive abilities of birds and primates, and anatomical data suggest that these animals may share similar organizational characteristics in the visual telencephalon. As is true in the primate cerebral cortex, different visual features are processed separately in the avian telencephalon where separate channels are organized in the anterior-posterior axis roughly parallel to the major laminae. Furthermore, the efferent projections from the primary visual telencephalon form an extensive column-like continuum involving the dorsolateral pallium and the lateral basal ganglia. Such a column-like organization may exist not only for vision, but for other sensory modalities and even for a continuum that links sensory and limbic areas of the avian brain. Behavioral and neural studies must be integrated in order to understand how birds have developed their amazing visual systems through 150 million years of evolution. 2010 S. Karger AG, Basel.
Lobier, Muriel; Palva, J Matias; Palva, Satu
2018-01-15
Visuospatial attention prioritizes processing of attended visual stimuli. It is characterized by lateralized alpha-band (8-14 Hz) amplitude suppression in visual cortex and increased neuronal activity in a network of frontal and parietal areas. It has remained unknown what mechanisms coordinate neuronal processing among the frontoparietal network and visual cortices and implement the attention-related modulations of alpha-band amplitudes and behavior. We investigated whether large-scale network synchronization could be such a mechanism. We recorded human cortical activity with magnetoencephalography (MEG) during a visuospatial attention task. We then identified the frequencies and anatomical networks of inter-areal phase synchronization from source-localized MEG data. We found that visuospatial attention is associated with robust and sustained long-range synchronization of cortical oscillations exclusively in the high-alpha (10-14 Hz) frequency band. This synchronization connected frontal, parietal and visual regions and was observed concurrently with amplitude suppression of low-alpha (6-9 Hz) band oscillations in visual cortex. Furthermore, stronger high-alpha phase synchronization was associated with decreased reaction times to attended stimuli and larger suppression of alpha-band amplitudes. These results thus show that high-alpha band phase synchronization is functionally significant and could coordinate the neuronal communication underlying the implementation of visuospatial attention. Copyright © 2017 Elsevier Inc. All rights reserved.
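As a rough illustration of how inter-areal phase synchronization in a narrow band can be quantified, the sketch below computes a phase-locking value between two band-pass-filtered signals. It is a minimal example, not the analysis pipeline of the study above: the band limits, filter settings, and synthetic test signals are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(10.0, 14.0), order=4):
    """Phase-locking value (0..1) between two signals within a frequency band."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))   # instantaneous phase of x
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))   # instantaneous phase of y
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Synthetic check: two noisy 12 Hz signals with a constant phase lag give a PLV near 1.
fs = 500.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 12 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(round(phase_locking_value(x, y, fs), 2))
```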
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
Localizing neural electrical activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current research in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. With such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
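A minimal sketch, assuming a known linear lead-field matrix L and a precomputed neighbor list for each source point, of a FOCUSS-style iteratively re-weighted solver in which each point's weight is derived from the previous solution over the point and its neighbors. The specific weight rule (a maximum over the neighborhood), the Tikhonov regularization, and the toy dimensions are illustrative assumptions, not the published CMOSS algorithm.

```python
import numpy as np

def neighbor_weighted_sparse_solve(L, b, neighbors, n_iter=10, lam=1e-3):
    """Iteratively re-weighted estimate of sources s with L @ s ~= b.
    neighbors[i] is a list of source indices adjacent to point i."""
    n_sensors, n_sources = L.shape
    s = np.ones(n_sources)
    for _ in range(n_iter):
        # Weight of each point: largest |s| over the point and its neighbors
        # from the previous iteration (neighbor-informed re-weighting).
        w = np.array([np.max(np.abs(s[[i] + list(neighbors[i])]))
                      for i in range(n_sources)])
        A = L @ np.diag(w)
        # Regularized weighted minimum-norm step: s = W A^T (A A^T + lam I)^-1 b.
        s = np.diag(w) @ A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_sensors), b)
    return s

# Toy example: 5 sensors, 12 sources along a line, one truly active source.
rng = np.random.default_rng(0)
L = rng.standard_normal((5, 12))
s_true = np.zeros(12)
s_true[4] = 1.0
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < 12] for i in range(12)}
print(np.round(neighbor_weighted_sparse_solve(L, L @ s_true, nbrs), 2))
```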
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.
Zhou, Yang; Wu, Dewei
2016-01-01
Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
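A minimal sketch of the similarity-and-recruitment step described above, assuming a landmark feature vector per view: similarity is a Gaussian function of Euclidean distance, and a new visual place cell is recruited when no existing cell fires above threshold. The parameter names sigma (firing-field width, standing in for the AFFF) and frt (firing-rate threshold) are illustrative.

```python
import numpy as np

def similarity(f_current, f_stored, sigma=1.0):
    """Gaussian similarity (1 = identical) based on Euclidean distance."""
    d = np.linalg.norm(np.asarray(f_current) - np.asarray(f_stored))
    return float(np.exp(-d ** 2 / (2 * sigma ** 2)))

def update_place_cells(f_current, cells, sigma=1.0, frt=0.3):
    """Return firing rates of visual place cells; recruit a new cell if none fires above frt.
    cells: list of stored landmark feature vectors (one per place cell)."""
    rates = [similarity(f_current, c, sigma) for c in cells]
    if not rates or max(rates) < frt:
        cells.append(np.asarray(f_current, dtype=float).copy())  # recruit a new place cell
        rates.append(1.0)
    return rates

# Toy run: two distinct views recruit two cells; a nearby revisit re-activates the first cell.
cells = []
for view in ([0.0, 0.0], [5.0, 5.0], [0.1, 0.0]):
    print(update_place_cells(np.array(view), cells))
```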
Spatial mapping and profiling of metabolite distributions during germination
Feenstra, Adam D.; Alexander, Liza E.; Song, Zhihong; ...
2017-06-20
Germination is a highly complex process by which seeds begin to develop and establish themselves as viable organisms. In this paper, we utilize a combination of GC-MS, LC-fluorescence, and mass spectrometry imaging (MSI) approaches to profile and visualize the metabolic distributions of germinating seeds from two different inbreds of maize seeds, B73 and Mo17. GC and LC analyses demonstrate that the two inbreds are highly differentiated in their metabolite profiles throughout the course of germination, especially with regard to amino acids, sugar alcohols, and small organic acids. Crude dissection of the seed followed by GC-MS analysis of polar metabolites also revealed that many compounds were highly sequestered among the various seed tissue types. To further localize compounds, matrix-assisted laser desorption/ionization MSI is utilized to visualize compounds in fine detail in their native environments over the course of germination. Most notably, the fatty acyl chain-dependent differential localization of phospholipids and TAGs was observed within the embryo and radicle, showing correlation with the heterogeneous distribution of fatty acids. Furthermore, other interesting observations include unusual localization of ceramides on the endosperm/scutellum boundary, and subcellular localization of ferulate in the aleurone.
A spatio-temporal model of the human observer for use in display design
NASA Astrophysics Data System (ADS)
Bosman, Dick
1989-08-01
A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.
NASA Astrophysics Data System (ADS)
Szatmári, Gábor; Pásztor, László
2016-04-01
Uncertainty is a general term expressing our imperfect knowledge in describing an environmental process, and we are aware of it (Bárdossy and Fodor, 2004). Sampling, laboratory measurements, models and so on are subject to uncertainty. Effective quantification and visualization of uncertainty would be indispensable to stakeholders (e.g. policy makers, society). Soil-related features and their spatial models should be rigorously targeted for uncertainty assessment because their inferences are further used in modelling and decision-making processes. The aim of our present study was to assess and effectively visualize the local uncertainty of the countrywide soil organic matter (SOM) spatial distribution model of Hungary using geostatistical tools and concepts. The Hungarian Soil Information and Monitoring System's SOM data (approximately 1,200 observations) and environment-related, spatially exhaustive secondary information (i.e. digital elevation model, climatic maps, MODIS satellite images and a geological map) were used to model the countrywide SOM spatial distribution by regression kriging. It is common to use the calculated estimation (or kriging) variance as a measure of uncertainty; however, the normality and homoscedasticity hypotheses had to be rejected according to our preliminary analysis of the data. Therefore, a normal score transformation and a sequential stochastic simulation approach were introduced to model and assess the local uncertainty. Five hundred equally probable realizations (i.e. stochastic images) were generated. This number of stochastic images is sufficient to provide a model of uncertainty at each location, which is a complete description of uncertainty in geostatistics (Deutsch and Journel, 1998). Furthermore, these models can be applied, for example, to contour the probability of specified events, which can be regarded as goal-oriented digital soil maps and are of interest for agricultural management and decision making as well. A standardized measure of the local entropy was used to visualize uncertainty, where entropy values close to 1 correspond to high uncertainty, whilst values close to 0 correspond to low uncertainty. The advantage of using local entropy in this context is that it combines probabilities from multiple members into a single number for each location of the model. In conclusion, it is straightforward to use a sequential stochastic simulation approach for the assessment of uncertainty when normality and homoscedasticity are violated. The visualization of uncertainty using the local entropy is effective and communicative to stakeholders because it represents the uncertainty through a single number on a [0, 1] scale. References: Bárdossy, Gy. & Fodor, J., 2004. Evaluation of Uncertainties and Risks in Geology. Springer-Verlag, Berlin Heidelberg. Deutsch, C.V. & Journel, A.G., 1998. GSLIB: geostatistical software library and user's guide. Oxford University Press, New York. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
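As a sketch of the final visualization step, the following function computes a standardized local entropy map from an ensemble of simulated realizations; the binning of SOM values into a fixed number of classes is an illustrative assumption, since the abstract does not specify how probabilities per location were discretized.

```python
import numpy as np

def standardized_local_entropy(realizations, n_bins=10):
    """Per-location entropy of an ensemble of simulated maps, scaled to [0, 1].
    realizations: array of shape (n_realizations, ny, nx)."""
    n_real, ny, nx = realizations.shape
    edges = np.linspace(realizations.min(), realizations.max(), n_bins + 1)
    entropy = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            counts, _ = np.histogram(realizations[:, i, j], bins=edges)
            p = counts[counts > 0] / n_real          # class probabilities at this location
            entropy[i, j] = -np.sum(p * np.log(p))
    return entropy / np.log(n_bins)                  # 1 = maximal uncertainty, 0 = none

# Toy ensemble: 500 realizations on a 20 x 20 grid, more variable on the right half.
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, (500, 20, 20))
ens[:, :, 10:] *= 3.0
print(standardized_local_entropy(ens).round(2))
```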
Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.
Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu
2018-05-01
Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, which contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.
Tools for visually exploring biological networks.
Suderman, Matthew; Hallett, Michael
2007-10-15
Many tools exist for visually exploring biological networks including well-known examples such as Cytoscape, VisANT, Pathway Studio and Patika. These systems play a key role in the development of integrative biology, systems biology and integrative bioinformatics. The trend in the development of these tools is to go beyond 'static' representations of cellular state, towards a more dynamic model of cellular processes through the incorporation of gene expression data, subcellular localization information and time-dependent behavior. We provide a comprehensive review of the relative advantages and disadvantages of existing systems with two goals in mind: to aid researchers in efficiently identifying the appropriate existing tools for data visualization; to describe the necessary and realistic goals for the next generation of visualization tools. In view of the first goal, we provide in the Supplementary Material a systematic comparison of more than 35 existing tools in terms of over 25 different features. Supplementary data are available at Bioinformatics online.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
Hakone, Anzu; Harrison, Lane; Ottley, Alvitta; Winters, Nathan; Gutheil, Caitlin; Han, Paul K J; Chang, Remco
2017-01-01
Prostate cancer is the most common cancer among men in the US, and yet most cases represent localized cancer for which the optimal treatment is unclear. Accumulating evidence suggests that the available treatment options, including surgery and conservative treatment, result in a similar prognosis for most men with localized prostate cancer. However, approximately 90% of patients choose surgery over conservative treatment, despite the risk of severe side effects like erectile dysfunction and incontinence. Recent medical research suggests that a key reason is the lack of patient-centered tools that can effectively communicate personalized risk information and enable them to make better health decisions. In this paper, we report the iterative design process and results of developing the PROgnosis Assessment for Conservative Treatment (PROACT) tool, a personalized health risk communication tool for localized prostate cancer patients. PROACT utilizes two published clinical prediction models to communicate the patients' personalized risk estimates and compare treatment options. In collaboration with the Maine Medical Center, we conducted two rounds of evaluations with prostate cancer survivors and urologists to identify the design elements and narrative structure that effectively facilitate patient comprehension under emotional distress. Our results indicate that visualization can be an effective means to communicate complex risk information to patients with low numeracy and visual literacy. However, the visualizations need to be carefully chosen to balance readability with ease of comprehension. In addition, due to patients' charged emotional state, an intuitive narrative structure that considers the patients' information need is critical to aid the patients' comprehension of their risk information.
Intelligent visual localization of wireless capsule endoscopes enhanced by color information.
Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2017-10-01
Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with low localization accuracy in practice. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike conventional, geometric VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about the CE or its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Yi; Cai, Zhonghou; Chen, Pice; Zhang, Qingteng; Highland, Matthew J.; Jung, Il Woong; Walko, Donald A.; Dufresne, Eric M.; Jeong, Jaewoo; Samant, Mahesh G.; Parkin, Stuart S. P.; Freeland, John W.; Evans, Paul G.; Wen, Haidan
2016-02-01
Dynamical phase separation during a solid-solid phase transition poses a challenge for understanding the fundamental processes in correlated materials. Critical information underlying a phase transition, such as localized phase competition, is difficult to reveal by measurements that are spatially averaged over many phase separated regions. The ability to simultaneously track the spatial and temporal evolution of such systems is essential to understanding mesoscopic processes during a phase transition. Using state-of-the-art time-resolved hard x-ray diffraction microscopy, we directly visualize the structural phase progression in a VO2 film upon photoexcitation. Following a homogeneous in-plane optical excitation, the phase transformation is initiated at discrete sites and completed by the growth of one lattice structure into the other, instead of a simultaneous isotropic lattice symmetry change. The time-dependent x-ray diffraction spatial maps show that the in-plane phase progression in laser-superheated VO2 is via a displacive lattice transformation as a result of relaxation from an excited monoclinic phase into a rutile phase. The speed of the phase front progression is quantitatively measured, and is faster than the process driven by in-plane thermal diffusion but slower than the sound speed in VO2. The direct visualization of localized structural changes in the time domain opens a new avenue to study mesoscopic processes in driven systems.
Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.
Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.
Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans
ERIC Educational Resources Information Center
Okada, Takashi; Sato, Wataru; Toichi, Motomi
2006-01-01
Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…
ERIC Educational Resources Information Center
Dillen, Claudia; Steyaert, Jean; Op de Beeck, Hans P.; Boets, Bart
2015-01-01
The embedded figures test has often been used to reveal weak central coherence in individuals with autism spectrum disorder (ASD). Here, we administered a more standardized automated version of the embedded figures test in combination with the configural superiority task, to investigate the effect of contextual modulation on local feature…
Kai Nils Nitzsche; Gernot Verch; Katrin Premke; Arthur Gessler; Zachary Kayler
2016-01-01
Crop fields are cultivated across continuities of soil, topography, and local climate that drive biological processes and nutrient cycling at the landscape scale; yet land management and agricultural research are often performed at the field scale, potentially neglecting the context of the surrounding landscape. Adding to this complexity is the overlap of ecosystems...
'What' and 'where' in the human brain.
Ungerleider, L G; Haxby, J V
1994-04-01
Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.
Lightness computation by the human visual system
NASA Astrophysics Data System (ADS)
Rudd, Michael E.
2017-05-01
A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing of spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
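A toy sketch, not Rudd's full model, of the edge-integration idea it builds on: the lightness of a target is computed as a weighted sum of log-luminance steps across the borders separating it from a reference region, so that different weights for incremental and decremental steps change the predicted match. The luminance values and weights below are arbitrary illustrations.

```python
import numpy as np

def edge_integrated_lightness(luminances, weights=None):
    """Log-lightness of the last region relative to the first, as a weighted sum of
    log-luminance steps across the intervening borders (Retinex-style edge integration)."""
    steps = np.diff(np.log(np.asarray(luminances, dtype=float)))
    weights = np.ones_like(steps) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(weights * steps))

# Background -> annulus -> disk display: down-weighting the remote (first) edge,
# as edge classification or attentional windowing might do, changes the prediction.
print(edge_integrated_lightness([100.0, 40.0, 70.0]))                  # equal weights
print(edge_integrated_lightness([100.0, 40.0, 70.0], weights=[0.5, 1.0]))
```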
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
Most of today's visualization libraries and applications are based on what is known as the visualization pipeline. In the visualization pipeline model, algorithms are encapsulated as "filtering" components with inputs and outputs. These components can be combined by connecting the outputs of one filter to the inputs of another filter. The visualization pipeline model is popular because it provides a convenient abstraction that allows users to combine algorithms in powerful ways. Unfortunately, the visualization pipeline cannot run effectively on exascale computers. Experts agree that the exascale machine will comprise processors that contain many cores. Furthermore, physical limitations will prevent data movement in and out of the chip (that is, between main memory and the processing cores) from keeping pace with improvements in overall compute performance. To use these processors to their fullest capability, it is essential to carefully consider memory access. This is where the visualization pipeline fails. Each filtering component in the visualization library is expected to take a data set in its entirety, perform some computation across all of the elements, and output the complete results. The process of iterating over all elements must be repeated in each filter, which is one of the worst possible ways to traverse memory when trying to maximize the number of executions per memory access. This project investigates a new type of visualization framework that exhibits the pervasive parallelism necessary to run on exascale machines. Our framework achieves this by defining algorithms in terms of functors, which are localized, stateless operations. Functors can be composited in much the same way as filters in the visualization pipeline, but their design allows them to run concurrently on massive numbers of lightweight threads. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstrably well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.
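The project itself targets exascale C++ runtimes, but the core idea of composing stateless functors so that data is traversed once rather than once per filter can be sketched in a few lines of Python; the functions and the vectorized map below are purely illustrative.

```python
import numpy as np

def to_celsius(x):                 # stateless, per-element functor
    return (x - 32.0) * 5.0 / 9.0

def above_boiling(x):              # another stateless functor
    return 1.0 if x >= 100.0 else 0.0

def compose(*functors):
    """Fuse functors so each element is visited once, instead of once per filter."""
    def fused(x):
        for f in functors:
            x = f(x)
        return x
    return fused

data = np.random.uniform(32.0, 400.0, size=100_000)
fused = compose(to_celsius, above_boiling)
# A fine-grained parallel runtime would map `fused` concurrently over chunks of `data`;
# np.vectorize stands in for that scheduling here.
flags = np.vectorize(fused)(data)
print(flags.mean())                # fraction of readings above boiling
```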
Diversity in spatial scope of contrast adaptation among mouse retinal ganglion cells.
Khani, Mohammad Hossein; Gollisch, Tim
2017-12-01
Retinal ganglion cells adapt to changes in visual contrast by adjusting their response kinetics and sensitivity. While much work has focused on the time scales of these adaptation processes, less is known about the spatial scale of contrast adaptation. For example, do small, localized contrast changes affect a cell's signal processing across its entire receptive field? Previous investigations have provided conflicting evidence, suggesting that contrast adaptation occurs either locally within subregions of a ganglion cell's receptive field or globally over the receptive field in its entirety. Here, we investigated the spatial extent of contrast adaptation in ganglion cells of the isolated mouse retina through multielectrode-array recordings. We applied visual stimuli so that ganglion cell receptive fields contained regions where the average contrast level changed periodically as well as regions with constant average contrast level. This allowed us to analyze temporal stimulus integration and sensitivity separately for stimulus regions with and without contrast changes. We found that the spatial scope of contrast adaptation depends strongly on cell identity, with some ganglion cells displaying clear local adaptation, whereas others, in particular large transient ganglion cells, adapted globally to contrast changes. Thus, the spatial scope of contrast adaptation in mouse retinal ganglion cells appears to be cell-type specific. This could reflect differences in mechanisms of contrast adaptation and may contribute to the functional diversity of different ganglion cell types. NEW & NOTEWORTHY Understanding whether adaptation of a neuron in a sensory system can occur locally inside the receptive field or whether it always globally affects the entire receptive field is important for understanding how the neuron processes complex sensory stimuli. For mouse retinal ganglion cells, we here show that both local and global contrast adaptation exist and that this diversity in spatial scope can contribute to the functional diversity of retinal ganglion cell types. Copyright © 2017 the American Physiological Society.
Cerebral laterality and verbal processes.
Sherman, J L; Kulhavy, R W; Burns, K
1976-11-01
Research suggests that we process information by way of two distinct and functionally separate coding systems. The localization of these two processing systems appears to be somewhat dependent on cerebral laterality, which has been shown to vary in right-handed and left-handed persons. To test the dual coding model, right-handed and left-handed subjects learned lists of abstract and concrete words under various conditions of visual and tactile interference. Right-handed subjects showed a significant superiority in remembering highly concrete items, while total recall did not differ reliably between groups.
Barraza, Paulo; Chavez, Mario; Rodríguez, Eugenio
2016-01-01
Similar to linguistic stimuli, music can also prime the meaning of a subsequent word. However, the brain dynamics underlying the semantic priming effect induced by music, and its relation to language, have so far remained unknown. To elucidate these issues, we compare the brain oscillatory response to visual words that have been semantically primed either by a musical excerpt or by an auditory sentence. We found that semantic violation between music-word pairs triggers a classical ERP N400 and induces a sustained increase of long-distance theta phase synchrony, along with a transient increase of local gamma activity. Similar results were observed after linguistic semantic violation, except for gamma activity, which increased after semantic congruence between sentence-word pairs. Our findings indicate that local gamma activity is a neural marker that signals different modes of semantic processing between music and language, revealing the dynamic and self-organized nature of semantic processing. Copyright © 2015 Elsevier Inc. All rights reserved.
Internal curvature signal and noise in low- and high-level vision
Grabowecky, Marcia; Kim, Yee Joon; Suzuki, Satoru
2011-01-01
How does internal processing contribute to visual pattern perception? By modeling visual search performance, we estimated internal signal and noise relevant to perception of curvature, a basic feature important for encoding of three-dimensional surfaces and objects. We used isolated, sparse, crowded, and face contexts to determine how internal curvature signal and noise depended on image crowding, lateral feature interactions, and level of pattern processing. Observers reported the curvature of a briefly flashed segment, which was presented alone (without lateral interaction) or among multiple straight segments (with lateral interaction). Each segment was presented with no context (engaging low-to-intermediate-level curvature processing), embedded within a face context as the mouth (engaging high-level face processing), or embedded within an inverted-scrambled-face context as a control for crowding. Using a simple, biologically plausible model of curvature perception, we estimated internal curvature signal and noise as the mean and standard deviation, respectively, of the Gaussian-distributed population activity of local curvature-tuned channels that best simulated behavioral curvature responses. Internal noise was increased by crowding but not by face context (irrespective of lateral interactions), suggesting prevention of noise accumulation in high-level pattern processing. In contrast, internal curvature signal was unaffected by crowding but modulated by lateral interactions. Lateral interactions (with straight segments) increased curvature signal when no contextual elements were added, but equivalent interactions reduced curvature signal when each segment was presented within a face. These opposing effects of lateral interactions are consistent with the phenomena of local-feature contrast in low-level processing and global-feature averaging in high-level processing. PMID:21209356
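A toy sketch of the kind of channel model described above, assuming Gaussian-tuned curvature channels whose population activity has a mean set by an internal signal and a standard deviation set by internal noise; fitting those two numbers so that simulated reports match behavior is the estimation step the abstract refers to. Tuning width, channel count, and all values are illustrative.

```python
import numpy as np

def simulate_curvature_reports(true_curv, signal, noise, channels,
                               n_trials=1000, tuning_width=0.2, rng=None):
    """Each trial: curvature-tuned channels respond with Gaussian tuning scaled by
    `signal` plus additive Gaussian noise of SD `noise`; the reported curvature is
    the preferred curvature of the most active channel."""
    rng = np.random.default_rng() if rng is None else rng
    tuning = np.exp(-(channels - true_curv) ** 2 / (2 * tuning_width ** 2))
    responses = signal * tuning + noise * rng.standard_normal((n_trials, channels.size))
    return channels[np.argmax(responses, axis=1)]

channels = np.linspace(-1.0, 1.0, 21)       # preferred curvatures of the channel bank
reports = simulate_curvature_reports(0.3, signal=1.0, noise=0.5, channels=channels)
print(reports.mean().round(2), reports.std().round(2))  # behavior predicted by (signal, noise)
```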
Development of climate data storage and processing model
NASA Astrophysics Data System (ADS)
Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.
2016-11-01
We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a «shared nothing» distributed computing architecture and assumes the use of a computing network where each computing node is independent and self-sufficient. Each node hosts dedicated software for the processing and visualization of geospatial data and provides programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed in accordance with the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
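One of the three speed-up approaches mentioned above, caching and reuse of previously obtained products, can be sketched as a small content-addressed result cache; the request fields, cache directory, and the synthetic compute function are illustrative assumptions, not the VRE's actual interfaces.

```python
import hashlib
import json
from pathlib import Path

import numpy as np

CACHE_DIR = Path("product_cache")            # hypothetical local cache of computed products
CACHE_DIR.mkdir(exist_ok=True)

def cached_product(request, compute):
    """Return a previously computed product for an identical request, else compute and store it."""
    key = hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()
    path = CACHE_DIR / f"{key}.npy"
    if path.exists():
        return np.load(path)                 # reuse previously obtained product
    result = compute(request)                # otherwise run the (possibly slow) processing
    np.save(path, result)
    return result

# Illustrative request: a time-mean field over a region and period (synthetic data here).
request = {"variable": "tas", "bbox": [45.5, 48.6, 16.1, 22.9], "years": [1981, 2010], "op": "mean"}
field = cached_product(request, lambda r: np.random.rand(120, 100, 100).mean(axis=0))
print(field.shape)
```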
CellMap visualizes protein-protein interactions and subcellular localization
Dallago, Christian; Goldberg, Tatyana; Andrade-Navarro, Miguel Angel; Alanis-Lobato, Gregorio; Rost, Burkhard
2018-01-01
Many tools visualize protein-protein interaction (PPI) networks. The tool introduced here, CellMap, adds one crucial novelty by visualizing PPI networks in the context of subcellular localization, i.e. the location in the cell or cellular component in which a PPI happens. Users can upload images of cells and define areas of interest against which PPIs for selected proteins are displayed (by default on a cartoon of a cell). Annotations of localization are provided by the user or through our in-house database. The visualizer and server are written in JavaScript, making CellMap easy to customize and to extend by researchers and developers. PMID:29497493
Cardillo, Ramona; Mammarella, Irene C; Garcia, Ricardo Basso; Cornoldi, Cesare
2017-05-01
Visuo-constructive and perceptual abilities have been poorly investigated in children with learning disabilities. The present study focused on local and global visuospatial processing in children with nonverbal learning disability (NLD) and dyslexia compared with typically-developing (TD) controls. Participants were presented with a modified block design task (BDT), in both a typical visuo-constructive version that involves reconstructing figures from blocks, and a perceptual version in which respondents must rapidly match unfragmented figures with a corresponding fragmented target figure. The figures used in the tasks were devised by manipulating two variables, perceptual cohesiveness and task uncertainty, which stimulate global or local processes. Our results confirmed that children with NLD had more problems with the visuo-constructive version of the task, whereas those with dyslexia showed only slight difficulty with the visuo-constructive version but greater difficulty with the perceptual version, especially in terms of response times. These findings are interpreted in relation to the slower visual processing speed of children with dyslexia, and to the visuo-constructive problems of children with NLD and their difficulty in flexibly deploying global versus local processes. The clinical and educational implications of these findings are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well-defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition) consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regards to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for auditory processing of individuals with differing degrees of musical experience.
Using endemic road features to create self-explaining roads and reduce vehicle speeds.
Charlton, Samuel G; Mackie, Hamish W; Baas, Peter H; Hay, Karen; Menezes, Miguel; Dixon, Claire
2010-11-01
This paper describes a project undertaken to establish a self-explaining roads (SER) design programme on existing streets in an urban area. The methodology focussed on developing a process to identify functional road categories and designs based on endemic road characteristics taken from functional exemplars in the study area. The study area was divided into two sections, one to receive SER treatments designed to maximise visual differences between road categories, and a matched control area to remain untreated for purposes of comparison. The SER design for local roads included increased landscaping and community islands to limit forward visibility, and removal of road markings to create a visually distinct road environment. In comparison, roads categorised as collectors received increased delineation, addition of cycle lanes, and improved amenity for pedestrians. Speed data collected 3 months after implementation showed a significant reduction in vehicle speeds on local roads and increased homogeneity of speeds on both local and collector roads. The objective speed data, combined with residents' speed choice ratings, indicated that the project was successful in creating two discriminably different road categories. 2010 Elsevier Ltd. All rights reserved.
Salience from the decision perspective: You know where it is before you know it is there.
Zehetleitner, Michael; Müller, Hermann J
2010-12-31
In visual search for feature contrast ("odd-one-out") singletons, identical manipulations of salience, whether by varying target-distractor similarity or dimensional redundancy of target definition, had smaller effects on reaction times (RTs) for binary localization decisions than for yes/no detection decisions. According to formal models of binary decisions, identical differences in drift rates would yield larger RT differences for slow than for fast decisions. From this principle and the present findings, it follows that decisions on the presence of feature contrast singletons are slower than decisions on their location. This is at variance with two classes of standard models of visual search and object recognition that assume a serial cascade of first detection, then localization and identification of a target object, but also inconsistent with models assuming that as soon as a target is detected all its properties, spatial as well as non-spatial (e.g., its category), are available immediately. As an alternative, we propose a model of detection and localization tasks based on random walk processes, which can account for the present findings.
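The decision-level argument above can be made concrete with a small simulation of single-boundary random walks: applying the same drift-rate difference to a fast and a slow decision yields a much larger reaction-time difference for the slow one. The drift values, bound, and noise level are arbitrary choices for illustration, not parameters from the study.

```python
import numpy as np

def first_passage_time(drift, bound=1.0, dt=0.002, sigma=0.1, rng=None):
    """Time for a noisy evidence accumulator with positive drift to reach the bound (s)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while x < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(1)

def mean_rt(drift, n=500):
    return float(np.mean([first_passage_time(drift, rng=rng) for _ in range(n)]))

# The same drift-rate difference (0.5) produces a small RT effect for fast decisions
# and a much larger RT effect for slow decisions.
print(round(mean_rt(3.0) - mean_rt(3.5), 3))   # fast decisions: small difference
print(round(mean_rt(1.0) - mean_rt(1.5), 3))   # slow decisions: large difference
```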
Puller, Christian; Rieke, Fred; Neitz, Jay; Neitz, Maureen
2015-01-01
At early stages of visual processing, receptive fields are typically described as subtending local regions of space and thus performing computations on a narrow spatial scale. Nevertheless, stimulation well outside of the classical receptive field can exert clear and significant effects on visual processing. Given the distances over which they occur, the retinal mechanisms responsible for these long-range effects would certainly require signal propagation via active membrane properties. Here the physiology of a wide-field amacrine cell—the wiry cell—in macaque monkey retina is explored, revealing receptive fields that represent a striking departure from the classic structure. A single wiry cell integrates signals over wide regions of retina, 5–10 times larger than the classic receptive fields of most retinal ganglion cells. Wiry cells integrate signals over space much more effectively than predicted from passive signal propagation, and spatial integration is strongly attenuated during blockade of NMDA spikes but integration is insensitive to blockade of NaV channels with TTX. Thus these cells appear well suited for contributing to the long-range interactions of visual signals that characterize many aspects of visual perception. PMID:26133804
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.
Kok, Peter; de Lange, Floris P
2014-07-07
An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.
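A bare-bones sketch of the population receptive field idea used above: a voxel's response to each stimulus frame is modeled as the overlap between a 2D Gaussian receptive field and the binary stimulus aperture, and the Gaussian's position and size are the fitted pRF parameters. The visual-field extent, grid size, and apertures below are illustrative assumptions.

```python
import numpy as np

def prf_prediction(x0, y0, sigma, apertures, extent=10.0):
    """Predicted response time course of a 2D Gaussian pRF centered at (x0, y0) with size
    sigma, given binary stimulus apertures of shape (n_timepoints, ny, nx) covering
    +/- `extent` degrees of visual field."""
    ny, nx = apertures.shape[1:]
    xs, ys = np.meshgrid(np.linspace(-extent, extent, nx), np.linspace(-extent, extent, ny))
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return apertures.reshape(len(apertures), -1) @ rf.ravel()

# Toy stimulus: a bar sweeping left to right across a 40 x 40 aperture grid.
n_frames, size = 20, 40
apertures = np.zeros((n_frames, size, size))
for f in range(n_frames):
    apertures[f, :, 2 * f: 2 * f + 2] = 1.0
print(prf_prediction(0.0, 0.0, 2.0, apertures).round(1))  # peaks when the bar crosses the pRF
```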
Vidal, Juan R.; Perrone-Bertolotti, Marcela; Kahane, Philippe; Lachaux, Jean-Philippe
2015-01-01
If conscious perception requires global information integration across active distant brain networks, how does the loss of conscious perception affect neural processing in these distant networks? Pioneering studies on perceptual suppression (PS) described specific local neural network responses in primary visual cortex, thalamus and lateral prefrontal cortex of the macaque brain. Yet the neural effects of PS have rarely been studied with intracerebral recordings outside these cortices and simultaneously across distant brain areas. Here, we combined (1) a novel experimental paradigm in which we produced a similar perceptual disappearance and also re-appearance by using visual adaptation with transient contrast changes, with (2) electrophysiological observations from human intracranial electrodes sampling wide brain areas. We focused on broadband high-frequency (50–150 Hz, i.e., gamma) and low-frequency (8–24 Hz) neural activity amplitude modulations related to target visibility and invisibility. We report that low-frequency amplitude modulations reflected stimulus visibility in a larger ensemble of recording sites as compared to broadband gamma responses, across distinct brain regions including occipital, temporal and frontal cortices. Moreover, the dynamics of the broadband gamma response distinguished stimulus visibility from stimulus invisibility earlier in anterior insula and inferior frontal gyrus than in temporal regions, suggesting a possible role of fronto-insular cortices in top–down processing for conscious perception. Finally, we report that in primary visual cortex only low-frequency amplitude modulations correlated directly with perceptual status. Interestingly, in this sensory area broadband gamma was not modulated during PS but became positively modulated after 300 ms when stimuli were rendered visible again, suggesting that local networks could be ignited by top–down influences during conscious perception. PMID:25642199
User Localization During Human-Robot Interaction
Alonso-Martín, F.; Gorostiza, Javi F.; Malfaz, María; Salichs, Miguel A.
2012-01-01
This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites for natural interaction, whether human-human or human-robot, is an adequate spatial arrangement between the interlocutors, that is, being oriented toward each other and situated at the right distance during the conversation so that the communicative process is satisfactory. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process; one of its main components is the user localization system presented here. To determine the most suitable position of the robot in relation to the user, a proxemic study of human-robot interaction is required, which is also described in this paper. The study was carried out with two groups of users: children aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented. PMID:23012577
Spatio-temporal dynamics of processing non-symbolic number: An ERP source localization study
Hyde, Daniel C.; Spelke, Elizabeth S.
2013-01-01
Coordinated studies with adults, infants, and nonhuman animals provide evidence for two distinct systems of non-verbal number representation. The ‘parallel individuation’ system selects and retains information about 1–3 individual entities and the ‘numerical magnitude’ system establishes representations of the approximate cardinal value of a group. Recent ERP work has demonstrated that these systems reliably evoke functionally and temporally distinct patterns of brain response that correspond to established behavioral signatures. However, relatively little is known about the neural generators of these ERP signatures. To address this question, we targeted known ERP signatures of these systems, by contrasting processing of small versus large non-symbolic numbers, and used a source localization algorithm (LORETA) to identify their cortical origins. Early processing of small numbers, showing the signature effects of parallel individuation on the N1 (∼150 ms), was localized primarily to extrastriate visual regions. In contrast, qualitatively and temporally distinct processing of large numbers, showing the signatures of approximate number representation on the mid-latency P2p (∼200–250 ms), was localized primarily to right intraparietal regions. In comparison, mid-latency small number processing was localized to the right temporal-parietal junction and left-lateralized intraparietal regions. These results add spatial information to the emerging ERP literature documenting the process by which we represent number. Furthermore, these results substantiate recent claims that early attentional processes determine whether a collection of objects will be represented through parallel individuation or as an approximate numerical magnitude by providing evidence that downstream processing diverges to distinct cortical regions. PMID:21830257
Hyde, Daniel C; Spelke, Elizabeth S
2012-09-01
Coordinated studies with adults, infants, and nonhuman animals provide evidence for two distinct systems of nonverbal number representation. The "parallel individuation" (PI) system selects and retains information about one to three individual entities and the "numerical magnitude" system establishes representations of the approximate cardinal value of a group. Recent event-related potential (ERP) work has demonstrated that these systems reliably evoke functionally and temporally distinct patterns of brain response that correspond to established behavioral signatures. However, relatively little is known about the neural generators of these ERP signatures. To address this question, we targeted known ERP signatures of these systems, by contrasting processing of small versus large nonsymbolic numbers, and used a source localization algorithm (LORETA) to identify their cortical origins. Early processing of small numbers, showing the signature effects of PI on the N1 (∼150 ms), was localized primarily to extrastriate visual regions. In contrast, qualitatively and temporally distinct processing of large numbers, showing the signatures of approximate number representation on the mid-latency P2p (∼200-250 ms), was localized primarily to right intraparietal regions. In comparison, mid-latency small number processing was localized to the right temporal-parietal junction and left-lateralized intraparietal regions. These results add spatial information to the emerging ERP literature documenting the process by which we represent number. Furthermore, these results substantiate recent claims that early attentional processes determine whether a collection of objects will be represented through PI or as an approximate numerical magnitude by providing evidence that downstream processing diverges to distinct cortical regions. Copyright © 2011 Wiley Periodicals, Inc.
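Source localization of ERP components in the LORETA family is commonly run through toolboxes such as MNE-Python, whose sLORETA option is a standardized variant of the algorithm named above. The sketch below illustrates the general workflow only; `evoked`, `fwd`, and `noise_cov` are assumed to come from an earlier pipeline stage, and the N1 window and regularization settings are placeholders rather than the authors' choices.

```python
# Hypothetical sLORETA workflow with MNE-Python (sLORETA is a standardized LORETA variant).
# `evoked`, `fwd`, and `noise_cov` are assumed to be produced by an earlier pipeline stage.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

def localize_n1(evoked, fwd, noise_cov, tmin=0.130, tmax=0.170, snr=3.0):
    """Return the strongest sLORETA source and its latency within an assumed N1 window."""
    inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / snr**2, method="sLORETA")
    n1 = stc.copy().crop(tmin=tmin, tmax=tmax)   # restrict to the N1 latency range
    return n1.get_peak()                         # (vertex id, peak time in seconds)
```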
The primary visual cortex in the neural circuit for visual orienting
NASA Astrophysics Data System (ADS)
Zhaoping, Li
The primary visual cortex (V1) is traditionally viewed as having little influence on the brain's motor outputs. However, V1 provides the most abundant cortical input directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting behaviors such as gaze shifts and head turns. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations through higher V1 responses. The conspicuous locations are usually, but not always, where the visual input statistics change. The population of V1 outputs to the SC, which is also retinotopic, enables the SC to select, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
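The local-to-global saliency computation and collicular winner-take-all readout described above can be caricatured in a few lines. The toy sketch below is an illustration of the idea, not Zhaoping's actual V1 circuit: iso-feature surround suppression is approximated by dividing each feature map by its local mean, saliency is the maximum suppressed response across features at each location, and a simple argmax stands in for SC lateral inhibition; the grid size, feature count, and singleton location are all invented.

```python
# Toy saliency-map readout: iso-feature surround suppression + max over features,
# with a winner-take-all stage standing in for collicular selection.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
h, w, n_features = 32, 32, 4                 # retinotopic grid and feature channels (invented)
feature_maps = rng.random((n_features, h, w))
feature_maps[2, 20, 11] = 3.0                # a feature singleton that should pop out here

# Iso-feature surround suppression: divide each map by its local neighborhood mean.
surround = uniform_filter(feature_maps, size=(1, 7, 7))
contextual = feature_maps / (surround + 1e-6)

# V1 saliency = maximum response across feature channels at each location.
saliency = contextual.max(axis=0)

# Winner-take-all readout (proxy for SC lateral inhibition) picks the saccade target.
target = np.unravel_index(np.argmax(saliency), saliency.shape)
print("saccade target (row, col):", target)   # expected near (20, 11)
```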
Switching between global and local levels: the level repetition effect and its hemispheric asymmetry
Kéïta, Luc; Bedoin, Nathalie; Burack, Jacob A.; Lepore, Franco
2014-01-01
The global level of hierarchical stimuli (Navon's stimuli) is typically processed more quickly and accurately than the local level; furthermore, differential hemispheric dominance has been described for local (left hemisphere, LH) and global (right hemisphere, RH) processing. However, neuroimaging and behavioral data indicate that stimulus category (letter or object) could modulate the hemispheric asymmetry for local-level processing. In addition, when targets are unpredictably displayed at the global or local level, the participant has to switch between levels, and the magnitude of the switch cost increases with the number of repeated-level trials preceding the switch. The hemispheric asymmetries associated with level switching remain an unresolved issue. LH areas may be involved in carrying over target-level information in the case of level repetition, and these areas may also participate extensively in the processing of level-changed trials. Here we hypothesized that RH areas underlie the inhibitory mechanism applied to the irrelevant level, as one of the components of the level-switching process. In an experiment using a within-subject design, hierarchical stimuli were briefly presented either to the right or to the left visual field, and 32 adults were instructed to identify the target at the global or local level. We assessed a possible RH dominance for non-target level inhibition by varying the attentional demands through the manipulation of level repetitions (two or four repeated-level trials before the switch). The behavioral data confirmed an LH specialization only for the local-level processing of letter-based stimuli, and a detrimental effect of increased level repetitions before a switch. Further, the data provide evidence for an RH advantage in inhibiting the non-target level. Taken together, the data support the existence of multiple mechanisms underlying level-switch effects. PMID:24723903
Cowley, Benjamin; Lukander, Kristian
2016-01-01
Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however, it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with task structure including phases of divided ("rule search") and selective ("rule found") attention, based on the Wisconsin Card Sorting Task (WCST). We test it in a laboratory study with 25 participants, and report on behavioral measures (physiological data were also gathered, but are not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioral results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and the objective dimension of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work. PMID:26941689
Registering Ground and Satellite Imagery for Visual Localization
2012-08-01
reckoning, inertial, stereo, light detection and ranging (LIDAR), cellular radio, and visual. As no sensor or algorithm provides perfect localization in...by metric localization approaches to confine the region of a map that needs to be searched. Simultaneous Localization and Mapping (SLAM) (5, 6), using...estimate the metric location of the camera. Se et al. (7) use SIFT features for both appearance-based global localization and incremental 3D SLAM. Johns and
Yarch, Jeff; Federer, Frederick
2017-01-01
Decades of anatomical studies on the primate primary visual cortex (V1) have led to a detailed diagram of V1 intrinsic circuitry, but this diagram lacks information about the output targets of V1 cells. Understanding how V1 local processing relates to downstream processing requires identification of neuronal populations defined by their output targets. In primates, V1 layers (L)2/3 and 4B send segregated projections to distinct cytochrome oxidase (CO) stripes in area V2: neurons in CO blob columns project to thin stripes while neurons outside blob columns project to thick and pale stripes, suggesting functional specialization of V1-to-V2 CO streams. However, the conventional diagram of V1 shows all L4B neurons, regardless of their soma location in blob or interblob columns, as projecting selectively to CO blobs in L2/3, suggesting convergence of blob/interblob information in L2/3 blobs and, possibly, some V2 stripes. However, it is unclear whether all L4B projection neurons show similar local circuitries. Using viral-mediated circuit tracing, we have identified the local circuits of L4B neurons projecting to V2 thick stripes in macaque. Consistent with previous studies, we found the somata of this L4B subpopulation to reside predominantly outside blob columns; however, unlike previous descriptions of local L4B circuits, these cells consistently projected outside CO blob columns in all layers. Thus, the local circuits of these L4B output neurons, just like their extrinsic projections to V2, preserve CO streams. Moreover, the intra-V1 laminar patterns of axonal projections identify two distinct neuron classes within this L4B subpopulation, including a rare novel neuron type, suggestive of two functionally specialized output channels. SIGNIFICANCE STATEMENT Conventional diagrams of primate primary visual cortex (V1) depict neuronal connections within and between different V1 layers, but lack information about the cells' downstream targets. This information is critical to understanding how local processing in V1 relates to downstream processing. We have identified the local circuits of a population of cells in V1 layer (L)4B that project to area V2. These cells' local circuits differ from classical descriptions of L4B circuits in both the laminar and functional compartments targeted by their axons, and identify two neuron classes. Our results demonstrate that both local intra-V1 and extrinsic V1-to-V2 connections of L4B neurons preserve CO-stream segregation, suggesting that across-stream integration occurs downstream of V1, and that output targets dictate local V1 circuitry. PMID:28077720
Striemer, Christopher L; Whitwell, Robert L; Goodale, Melvyn A
2017-11-12
Previous research suggests that the implicit recognition of emotional expressions may be carried out by pathways that bypass primary visual cortex (V1) and project to the amygdala. Some of the strongest evidence supporting this claim comes from case studies of "affective blindsight" in which patients with V1 damage can correctly guess whether an unseen face was depicting a fearful or happy expression. In the current study, we report a new case of affective blindsight in patient MC who is cortically blind following extensive bilateral lesions to V1, as well as face and object processing regions in her ventral visual stream. Despite her large lesions, MC has preserved motion perception which is related to sparing of the motion sensitive region MT+ in both hemispheres. To examine affective blindsight in MC we asked her to perform gender and emotion discrimination tasks in which she had to guess, using a two-alternative forced-choice procedure, whether the face presented was male or female, happy or fearful, or happy or angry. In addition, we also tested MC in a four-alternative forced-choice target localization task. Results indicated that MC was not able to determine the gender of the faces (53% accuracy), or localize targets in a forced-choice task. However, she was able to determine, at above chance levels, whether the face presented was depicting a happy or fearful (67%, p = .006), or a happy or angry (64%, p = .025) expression. Interestingly, although MC was better than chance at discriminating between emotions in faces when asked to make rapid judgments, her performance fell to chance when she was asked to provide subjective confidence ratings about her performance. These data lend further support to the idea that there is a non-conscious visual pathway that bypasses V1 which is capable of processing affective signals from facial expressions without input from higher-order face and object processing regions in the ventral visual stream. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-10-13
The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.
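The three-dimensional Fourier analysis mentioned above treats the stimulus as a volume over (x, y, t), in which a rigidly drifting pattern concentrates its energy at a spatiotemporal frequency whose ratio gives the velocity. The sketch below runs that analysis on a synthetic drifting grating rather than the authors' disk stimulus; the grating parameters are arbitrary.

```python
# 3D Fourier analysis of a synthetic drifting grating sampled over (y, x, t).
import numpy as np

nx, ny, nt = 64, 64, 64
x = np.arange(nx) / nx
t = np.arange(nt) / nt
sf, tf = 4.0, 8.0                                     # cycles/image and cycles/sequence (arbitrary)
# Grating drifting along x: luminance depends on (sf*x - tf*t); constant along y.
movie = np.cos(2 * np.pi * (sf * x[None, :, None] - tf * t[None, None, :]))
movie = np.repeat(movie, ny, axis=0)                  # shape (ny, nx, nt)

spectrum = np.abs(np.fft.fftn(movie))
spectrum[0, 0, 0] = 0.0                               # ignore the DC component
ky, kx, ft = np.unravel_index(np.argmax(spectrum), spectrum.shape)
kx = kx if kx <= nx // 2 else kx - nx                 # unwrap to signed frequencies
ft = ft if ft <= nt // 2 else ft - nt
print("spatial freq:", kx, "temporal freq:", ft, "speed:", -ft / kx)  # speed = tf/sf = 2.0
```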
NASA Astrophysics Data System (ADS)
Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin
2005-04-01
A comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotion recognition and to follow their time course with millisecond resolution. The effect of visual stimuli from the International Affective Picture System (IAPS) on activation in males and females was examined. Hemodynamic and electrophysiological responses were measured in the same subjects: both fMRI and ERP were acquired in an event-related design. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner, and 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but its source localization and timing are limited by the ill-posed inverse problem. In this study we investigate ERP source reconstruction under fMRI constraints. We chose ICA as a pre-processing step for ERP source reconstruction to exclude artifacts and to provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.
Do rats use shape to solve “shape discriminations”?
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141
Disturbed temporal dynamics of brain synchronization in vision loss.
Bola, Michał; Gall, Carolin; Sabel, Bernhard A
2015-06-01
Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process; rather, it emerges from activity of widespread cortical networks which coordinate visual processing in space and time. Here we set out to study how vision loss affects activity of brain visual networks and how networks' activity is related to perception. Specifically, we focused on studying temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized/desynchronized periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We revealed that patients exhibit less complex, more random and noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation. We propose that because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. Copyright © 2015 Elsevier Ltd. All rights reserved.
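Of the temporal-pattern measures listed above, the Higuchi fractal dimension is the most compact to illustrate: it estimates signal complexity from how the mean curve length of subsampled copies of the series scales with the subsampling interval. The sketch below is a generic implementation applied to a simulated amplitude time series; the k_max setting and the white-noise stand-in are arbitrary choices, and the DFA and period-length measures are not shown.

```python
# Higuchi fractal dimension of a synchronization time series (higher = more irregular).
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = x.size
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)                  # subsampled series x[m::k]
            diff = np.abs(np.diff(x[idx]))
            norm = (n - 1) / ((len(idx) - 1) * k)     # Higuchi's length normalization
            lk.append(diff.sum() * norm / k)
        lengths.append(np.mean(lk))
    # Slope of log(L(k)) versus log(1/k) gives the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(2)
envelope = rng.standard_normal(2000)        # stand-in for an alpha-amplitude time series
print("Higuchi FD:", higuchi_fd(envelope))  # about 2 for white noise, lower for smoother signals
```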
Multi-focused geospatial analysis using probes.
Butkiewicz, Thomas; Dou, Wenwen; Wartell, Zachary; Ribarsky, William; Chang, Remco
2008-01-01
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions-of-interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might nonetheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions comprised medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences was crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
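The probability-summation benchmark referred to above is typically computed by assuming each patch is detected independently according to a Weibull function, so that the plaid is detected whenever any one patch is; under that assumption the predicted four-patch threshold is the single-patch threshold scaled by 4^(-1/beta). The sketch below illustrates the calculation with made-up parameters, not the values fitted in the study.

```python
# Probability-summation (PS) prediction for a 2x2 plaid from four independent detectors,
# assuming a high-threshold Weibull detection function per patch (illustrative parameters).
import numpy as np

def p_detect(c, alpha, beta):
    """Probability that a single patch is detected at contrast c (Weibull)."""
    return 1.0 - np.exp(-(c / alpha) ** beta)

alpha, beta = 0.02, 3.0                 # made-up single-patch threshold contrast and slope
criterion = 1.0 - np.exp(-1.0)          # threshold defined as ~63% detection

# PS over four independent patches: the plaid is detected if any patch is detected.
contrasts = np.linspace(1e-4, 0.05, 2000)
p_plaid = 1.0 - (1.0 - p_detect(contrasts, alpha, beta)) ** 4

threshold_ps = contrasts[np.argmax(p_plaid >= criterion)]     # numerical PS threshold
threshold_closed_form = alpha * 4 ** (-1.0 / beta)            # analytic equivalent
print(threshold_ps, threshold_closed_form)                    # the two should nearly agree
```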
Intracranial Cortical Responses during Visual–Tactile Integration in Humans
Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric
2014-01-01
Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses to identify candidate multisensory responses. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
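The first step of the two-step criterion above amounts to testing, time point by time point, whether the bimodal response departs from the sum of the two unisensory responses. A minimal sketch of that comparison follows; the trial data are simulated, the additive prediction is built from randomly paired unisensory trials, and the second (double-addition) control step is not shown.

```python
# Step 1 of a multisensory-integration test: bimodal response vs. sum of unisensory responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_trials, n_times = 60, 200                 # made-up trial counts and time points
visual = rng.standard_normal((n_trials, n_times))
tactile = rng.standard_normal((n_trials, n_times))
bimodal = rng.standard_normal((n_trials, n_times)) + 0.3   # pretend superadditive offset

# Additive prediction built from randomly paired unisensory trials.
additive = visual + tactile[rng.permutation(n_trials)]

# Two-sample t-test at each time point; significant samples flag candidate interactions.
t, p = stats.ttest_ind(bimodal, additive, axis=0)
interaction_mask = p < 0.05                 # uncorrected, for illustration only
print("time points flagged:", int(interaction_mask.sum()), "of", n_times)
```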
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-12-21
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. This tool is developed based on Java/Java3D/JOGL and provides a standalone application compatible to all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies – Three.js, D3.js and PHP – as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S
2016-08-01
Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. 26 literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
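Lateralization indices of the kind derived above are conventionally computed as (L - R) / (L + R) over an activation measure, yielding +1 for fully left-lateralized and -1 for fully right-lateralized responses. A minimal sketch with invented peak values:

```python
# Lateralization index LI = (L - R) / (L + R); positive = left-lateralized.
def lateralization_index(left, right):
    return (left - right) / (left + right)

# Made-up peak MR responses (arbitrary units) for one subject.
word_li = lateralization_index(left=2.4, right=1.1)   # words: expected left-lateralized
face_li = lateralization_index(left=0.9, right=2.6)   # faces: expected right-lateralized
print(f"word LI = {word_li:.2f}, face LI = {face_li:.2f}")
```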
NASA Astrophysics Data System (ADS)
Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd
2009-05-01
Infrastructure management (and its associated processes) is complex to understand and perform, which makes efficient and effective informed decision-making difficult. The management involves a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our ongoing multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and local and federal government agencies. IRSV is being designed to accommodate the essential needs of the following aspects: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.
Fillion, Myriam; Lemire, Mélanie; Philibert, Aline; Frenette, Benoît; Weiler, Hope Alberta; Deguire, Jason Robert; Guimarães, Jean Remy Davée; Larribe, Fabrice; Barbosa, Fernando; Mergler, Donna
2013-07-01
Visual functions are known to be sensitive to toxins such as mercury (Hg) and lead (Pb), while omega-3 fatty acids (FA) and selenium (Se) may be protective. In the Tapajós region of the Brazilian Amazon, all of these elements are present in the local diet. We examined how near visual contrast sensitivity and acquired color vision loss vary with biomarkers of toxic exposures (Hg and Pb) and the nutrients Se and omega-3 FA in riverside communities of the Tapajós. Complete visuo-ocular examinations were performed. Near visual contrast sensitivity and color vision were assessed in 228 participants (≥15 years) without diagnosed age-related cataracts or ocular pathologies and with near visual acuity refracted to at least 20/40. Biomarkers of Hg (hair), Pb (blood), Se (plasma), and the omega-3 FAs eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in plasma phospholipids were measured. Multiple linear regressions were used to examine the relations between visual outcomes and biomarkers, taking into account age, sex, drinking and smoking. Reduced contrast sensitivity at all spatial frequencies was associated with hair Hg, while %EPA, and to a lesser extent %EPA+DHA, were associated with better visual function. The intermediate spatial frequency of contrast sensitivity (12 cycles/degree) was negatively related to blood Pb and positively associated with plasma Se. Acquired color vision loss increased with hair Hg and decreased with plasma Se and %EPA. These findings suggest that the local diet of riverside communities of the Amazon contains toxic substances that can have deleterious effects on vision as well as nutrients that are beneficial for visual function. Since remediation at the source is a long process, a better knowledge of the nutrient content and health effects of traditional foods would be useful to minimize harmful effects of Hg and Pb exposure. Copyright © 2013 Elsevier Inc. All rights reserved.
Feature integration and object representations along the dorsal stream visual hierarchy
Perry, Carolyn Jeane; Fallah, Mazyar
2014-01-01
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147
Temporal characteristics of audiovisual information processing.
Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T
2008-05-14
In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency at which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
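The information-theoretic latency analysis described above can be summarized as: for each voxel, discretize the BOLD response at a range of post-stimulus lags, compute the mutual information between stimulus label and response at each lag, and keep the lag with the most information. The sketch below uses simulated trials; the number of lags, the binning, and the planted effect are arbitrary illustration choices.

```python
# Per-voxel "informative latency": lag at which the mutual information between
# stimulus label and (discretized) BOLD response is maximal.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(4)
n_trials, n_lags = 120, 8
labels = rng.integers(0, 3, n_trials)                 # e.g., A, V, AV conditions (made up)
bold = rng.standard_normal((n_trials, n_lags))
bold[:, 3] += labels * 0.8                            # plant information at lag index 3

def mi_with_labels(y, x, bins=8):
    """Mutual information between labels y and a continuous response x (histogram binning)."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    return mutual_info_score(y, x_binned)

mi_per_lag = np.array([mi_with_labels(labels, bold[:, lag]) for lag in range(n_lags)])
best_lag = int(np.argmax(mi_per_lag))
print("most informative lag index:", best_lag)        # expected 3
```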
Reading impairment in schizophrenia: dysconnectivity within the visual system.
Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël
2014-01-01
Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.
[Multifocal visual electrophysiology in visual function evaluation].
Peng, Shu-Ya; Chen, Jie-Min; Liu, Rui-Jue; Zhou, Shu; Liu, Dong-Mei; Xia, Wen-Tao
2013-08-01
Multifocal visual electrophysiology, consisting of multifocal electroretinography (mfERG) and multifocal visual evoked potentials (mfVEP), can objectively evaluate retinal function and the status of the retino-cortical conduction pathway by stimulating many local retinal regions and recording each local response simultaneously. With advantages such as short testing time and high sensitivity, it has been widely used in clinical ophthalmology, especially in the diagnosis of retinal disease and glaucoma. It is also a new objective technique in clinical forensic medicine, particularly for visual function evaluation after ocular trauma. This article summarizes the stimulation methods, electrode positions, analysis methods, and visual function evaluation of mfERG and mfVEP, and discusses the value of multifocal visual electrophysiology in forensic medicine.
Real-time Mesoscale Visualization of Dynamic Damage and Reaction in Energetic Materials under Impact
NASA Astrophysics Data System (ADS)
Chen, Wayne; Harr, Michael; Kerschen, Nicholas; Maris, Jesus; Guo, Zherui; Parab, Niranjan; Sun, Tao; Fezzaa, Kamel; Son, Steven
Energetic materials may be subjected to impact and vibration loading. Under these dynamic loadings, local stress or strain concentrations may lead to the formation of hot spots and unintended reaction. To visualize the dynamic damage and reaction processes in polymer-bonded energetic crystals under dynamic compressive loading, a high-speed X-ray phase contrast imaging setup was synchronized with a Kolsky bar and a light gas gun. Controlled compressive loading was applied to PBX specimens containing a single or multiple energetic crystal particles, and the impact-induced damage and reaction processes were captured using the high-speed X-ray imaging setup. Impact velocities were systematically varied to explore the critical conditions for reaction. At lower loading rates, ultrasonic excitations were also applied to progressively damage the crystals, eventually leading to reaction. AFOSR, ONR.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
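The partitioning used above follows the standard DIC bookkeeping applied observation by observation: the posterior-mean pointwise deviance plus a local effective-parameter term. The sketch below shows that bookkeeping for a Gaussian likelihood with simulated data and simulated posterior draws; it illustrates the decomposition only, not the spatially varying coefficient models fitted in the paper.

```python
# Local DIC contributions for a Gaussian model: DIC_i = 2*mean_dev_i - dev_i(posterior mean).
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(1.0, 1.0, size=50)                     # simulated observations
mu_draws = rng.normal(y.mean(), 0.15, size=2000)      # stand-in posterior draws of the mean
sigma = 1.0                                           # known variance, for simplicity

def deviance(y, mu, sigma):
    """Pointwise deviance -2*log N(y | mu, sigma^2); broadcasts over draws."""
    return (y - mu) ** 2 / sigma**2 + np.log(2 * np.pi * sigma**2)

dev_draws = deviance(y[None, :], mu_draws[:, None], sigma)   # shape (draws, observations)
dbar_i = dev_draws.mean(axis=0)                              # posterior-mean deviance per obs.
dhat_i = deviance(y, mu_draws.mean(), sigma)                 # deviance at the posterior mean
local_pd = dbar_i - dhat_i                                   # local effective parameters
local_dic = dbar_i + local_pd                                # equals 2*dbar_i - dhat_i
print("total DIC:", local_dic.sum(), "total pD:", local_pd.sum())
```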
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
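Bayesian models of visual capture in this vein weigh a prior probability that the auditory and visual cues share a common cause against the likelihood of the sensed disparity; when the common-cause posterior is high, the auditory estimate is pulled toward the more reliable visual cue. The sketch below is a simplified, generic causal-inference calculation with made-up noise parameters and a broad, effectively flat spatial prior, not the specific model fitted to these data.

```python
# Simplified causal-inference sketch of audio-visual capture (flat spatial prior assumed).
import numpy as np

sigma_a, sigma_v, sigma_x = 8.0, 2.0, 15.0   # made-up auditory/visual noise and separate-source spread (deg)
p_common = 0.6                               # assumed prior probability of a common cause

def normal_pdf(d, var):
    return np.exp(-d**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def auditory_estimate(x_a, x_v):
    """Model-averaged auditory location estimate given noisy cues x_a, x_v (degrees)."""
    d = x_a - x_v
    like_c1 = normal_pdf(d, sigma_a**2 + sigma_v**2)                    # same source
    like_c2 = normal_pdf(d, sigma_a**2 + sigma_v**2 + 2 * sigma_x**2)   # independent sources
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    fused = (x_a / sigma_a**2 + x_v / sigma_v**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    return post_c1 * fused + (1 - post_c1) * x_a                        # model averaging

print(auditory_estimate(x_a=0.0, x_v=5.0))    # small disparity: strong capture toward vision
print(auditory_estimate(x_a=0.0, x_v=30.0))   # large disparity: little capture
```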
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
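The enhancement scheme described above, modifying transform coefficients with a non-linear operator before reconstruction, can be illustrated with an ordinary separable wavelet standing in for the non-separable hexagonal transform used by the authors. The sketch below applies a power-law gain to the finest detail levels of a random stand-in image; the wavelet, gain, and level choices are arbitrary.

```python
# Multiscale contrast-enhancement sketch: non-linear gain on wavelet detail coefficients.
# A separable wavelet (PyWavelets) stands in for the hexagonal transform.
import numpy as np
import pywt

rng = np.random.default_rng(6)
image = rng.random((256, 256))                      # stand-in for a mammogram

coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
approx, details = coeffs[0], coeffs[1:]             # details run from coarsest to finest

def enhance(d, gain=2.0, p=0.7):
    """Power-law gain: boosts low-magnitude (low-contrast) detail coefficients."""
    return np.sign(d) * gain * np.abs(d) ** p

# Apply the gain to the two finest levels only (local edge support).
enhanced = [tuple(enhance(band) for band in level) if i >= len(details) - 2 else level
            for i, level in enumerate(details)]

enhanced_image = pywt.waverec2([approx] + enhanced, wavelet="haar")
print(enhanced_image.shape)
```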
Perceptual Averaging in Individuals with Autism Spectrum Disorder.
Corbett, Jennifer E; Venuti, Paola; Melcher, David
2016-01-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Interactive Exploration on Large Genomic Datasets.
Tu, Eric
2016-01-01
The prevalence of large genomics datasets has made the need to explore these data ever more important. Large sequencing projects like the 1000 Genomes Project [1], which reconstructed the genomes of 2,504 individuals sampled from 26 populations, have produced over 200TB of publicly available data. Meanwhile, existing genomic visualization tools have been unable to scale with the growing amount of larger, more complex data. This difficulty is acute when viewing large regions (over 1 megabase, or 1,000,000 bases of DNA), or when concurrently viewing multiple samples of data. While genomic processing pipelines have shifted towards using distributed computing techniques, such as with ADAM [4], genomic visualization tools have not. In this work we present Mango, a scalable genome browser built on top of ADAM that can run both locally and on a cluster. Mango presents a set of optimizations that can be combined in a single application to drive novel genomic visualization techniques over terabytes of genomic data. By building visualization on top of a distributed processing pipeline, we can perform visualization queries over large regions that are not possible with current tools, and decrease the time for viewing large data sets. Mango is part of the Big Data Genomics project at University of California-Berkeley [25] and is published under the Apache 2 license. Mango is available at https://github.com/bigdatagenomics/mango.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Visual impairment in Northern Ireland.
Canavan, Y. M.; Jackson, A. J.; Stewart, A.
1997-01-01
Statistics on the registration of blind and partially-sighted patients in Northern Ireland underestimate the true extent of visual impairment within our community. In comparison to other UK regions, where between 0.53% and 0.59% of the population avail of blind or partial sight registration, only 0.35% of residents in Northern Ireland appear on the respective registers. Most patients on the combined registers are in the older age groups and many also suffer from other disabilities. Regional discrepancies may be attributed to a combination of factors including: patient attitudes to the registration process, medical attitudes to registration and local anomalies in the way in which social services departments both record and present annual registration returns. Better liaison is necessary between the community, hospital and voluntary sector providers to improve identification and support services for the visually impaired in the future. PMID:9414937
Patterns in the sky: Natural visualization of aircraft flow fields
NASA Technical Reports Server (NTRS)
Campbell, James F.; Chambers, Joseph R.
1994-01-01
The objective of the current publication is to present the collection of flight photographs to illustrate the types of flow patterns that were visualized and to present qualitative correlations with computational and wind tunnel results. Initially in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and factors which determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed by using water-vapor relationships to determine if computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields that was visualized for a wide range of aircraft and flight maneuvers.
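One common water-vapor relationship of the kind referred to above is the Tetens approximation for saturation vapor pressure; the sketch below (an illustrative assumption, the report's exact formulas may differ) shows how relative humidity rises toward saturation as air at fixed vapor pressure cools, which is when visible condensate can appear.

```python
# Relative humidity from vapor pressure, using the Tetens approximation (assumed here).
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Tetens approximation over liquid water, in hPa."""
    return 6.1078 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def relative_humidity(vapor_pressure_hpa, t_celsius):
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure_hpa(t_celsius)

# e.g. air expanding and cooling in a local flow field: same vapor pressure, lower temperature
for t in (20.0, 10.0, 5.0):
    print(f"T = {t:5.1f} C  RH = {relative_humidity(12.0, t):6.1f} %")
```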
Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.
Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan
2017-06-01
Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty in transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the local browser undermine the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, real-time computing, interactive data analysis, and cross-platform analyses feature WebGL- and HTML5-based tools, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.
Casellato, Claudia; Pedrocchi, Alessandra; Zorzi, Giovanna; Vernisse, Lea; Ferrigno, Giancarlo; Nardocci, Nardo
2013-05-01
New insights suggest that dystonic motor impairments could also involve a deficit of sensory processing. In this framework, biofeedback, making covert physiological processes more overt, could be useful. The present work proposes an innovative integrated setup which provides the user with an electromyogram (EMG)-based visual-haptic biofeedback during upper limb movements (spiral tracking tasks), to test if augmented sensory feedbacks can induce motor control improvement in patients with primary dystonia. The ad hoc developed real-time control algorithm synchronizes the haptic loop with the EMG reading; the brachioradialis EMG values were used to modify visual and haptic features of the interface: the higher was the EMG level, the higher was the virtual table friction and the background color proportionally moved from green to red. From recordings on dystonic and healthy subjects, statistical results showed that biofeedback has a significant impact, correlated with the local impairment, on the dystonic muscular control. These tests pointed out the effectiveness of biofeedback paradigms in gaining a better specific-muscle voluntary motor control. The flexible tool developed here shows promising prospects of clinical applications and sensorimotor rehabilitation.
Working memory load and the retro-cue effect: A diffusion model account.
Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S
2018-02-01
Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single-item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
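To make the two model quantities concrete, the following toy drift-diffusion simulation (illustrative parameter values, not the parameters estimated in the study) shows how a higher drift rate, standing for better evidence quality, and a shorter nondecision time each shorten mean response times.

```python
# Minimal drift-diffusion simulation; drift = evidence quality, ter = nondecision time.
import numpy as np

def simulate_ddm(drift, boundary=1.0, ter=0.3, noise=1.0, dt=0.002, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:                       # accumulate noisy evidence
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ter)                            # decision time plus nondecision time
        hits.append(x >= boundary)                     # upper boundary = correct response
    return np.mean(rts), np.mean(hits)

for label, drift, ter in [("no retro-cue", 1.0, 0.40), ("retro-cue", 2.0, 0.30)]:
    mean_rt, acc = simulate_ddm(drift, ter=ter)
    print(f"{label:12s} mean RT = {mean_rt:.3f} s, accuracy = {acc:.3f}")
```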
Non-conscious processing of motion coherence can boost conscious access.
Kaunitz, Lisandro; Fracasso, Alessio; Lingnau, Angelika; Melcher, David
2013-01-01
Research on the scope and limits of non-conscious vision can advance our understanding of the functional and neural underpinnings of visual awareness. Here we investigated whether distributed local features can be bound, outside of awareness, into coherent patterns. We used continuous flash suppression (CFS) to create interocular suppression, and thus lack of awareness, for a moving dot stimulus that varied in terms of coherence with an overall pattern (radial flow). Our results demonstrate that for radial motion, coherence favors the detection of patterns of moving dots even under interocular suppression. Coherence caused dots to break through the masks more often: this indicates that the visual system was able to integrate low-level motion signals into a coherent pattern outside of visual awareness. In contrast, in an experiment using meaningful or scrambled biological motion we did not observe any increase in the sensitivity of detection for meaningful patterns. Overall, our results are in agreement with previous studies on face processing and with the hypothesis that certain features are spatiotemporally bound into coherent patterns even outside of attention or awareness.
Systems Imaging of the Immune Synapse.
Ambler, Rachel; Ruan, Xiangtao; Murphy, Robert F; Wülfing, Christoph
2017-01-01
Three-dimensional live cell imaging of the interaction of T cells with antigen-presenting cells (APCs) visualizes the subcellular distributions of signaling intermediates during T cell activation at thousands of resolved positions within a cell. These information-rich maps of local protein concentrations are a valuable resource in understanding T cell signaling. Here, we describe a protocol for the efficient acquisition of such imaging data and their computational processing to create four-dimensional maps of local concentrations. This protocol allows quantitative analysis of T cell signaling as it occurs inside live cells with resolution in time and space across thousands of cells.
2016-01-01
Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract. Does the neural basis of reference resolution depend on the nature of the referents, or do our brains use a modality-general mechanism for linking meanings to referents? Here we report evidence for both. Using magnetoencephalography (MEG), we varied both the modality of referents, which consisted either of visual or auditory objects, and the point at which reference resolution was possible within sentences. Source-localized MEG responses revealed brain activity associated with reference resolution that was independent of the modality of the referents, localized to the medial parietal lobe and starting ∼415 ms after the onset of reference-resolving words. A modality-specific response to reference resolution in auditory domains was also found, in the vicinity of auditory cortex. Our results suggest that referential language processing cannot be reduced to processing in classical language regions and representations of the referential domain in modality-specific neural systems. Instead, our results suggest that reference resolution engages medial parietal cortex, which supports a mechanism for referential processing regardless of the content modality. PMID:28058272
Stable statistical representations facilitate visual search.
Corbett, Jennifer E; Melcher, David
2014-10-01
Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.
Altschuler, Ted S.; Molholm, Sophie; Butler, John S.; Mercier, Manuel R.; Brandwein, Alice B.; Foxe, John J.
2014-01-01
The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230-400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N = 63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. PMID:24365674
Kálmán, Mihály; Oszwald, Erzsébet; Adorján, István
2018-01-01
Dystroglycan has an important role in the binding of perivascular glial end-feet to the basal lamina. Its β-subunit is localized in the glial end-feet. The investigation period lasted from embryonic day (E)12 to E20. Laminin and β-dystroglycan were detected by immunohistochemistry, and the glial localization of the latter was supported by electron microscopy. The immature glial structures were visualized by immunostaining of nestin. The β-dystroglycan immunoreactivity appeared at E16, following the laminin of the basal lamina but preceding the perivascular processes of radial glia (E18) and astrocyte-like cells (E20). It occurred in cell bodies that attached to the vessels directly, rather than via vascular processes and end-feet. The presence of β-dystroglycan in such immature cells may promote their differentiation into perivascular astrocytes and influence the formation of the glio-vascular processes.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
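The sketch below illustrates only the compositing arithmetic at the heart of such a scheme, in serial Python with synthetic partial images; the distribution across processors and the overlap of compositing with local ray-casting described in the abstract are omitted.

```python
# Sketch of the compositing step only: each "processor" produces a partial RGBA image
# of its local sub-volume; partial images are blended front to back with the "over" operator.
import numpy as np

def composite_front_to_back(partials):
    """partials: list of (H, W, 4) RGBA arrays ordered front to back along the view ray."""
    h, w, _ = partials[0].shape
    color = np.zeros((h, w, 3))
    alpha = np.zeros((h, w, 1))
    for layer in partials:
        c, a = layer[..., :3], layer[..., 3:4]
        color += (1.0 - alpha) * a * c     # accumulate color weighted by remaining transparency
        alpha += (1.0 - alpha) * a         # accumulate opacity
    return color, alpha

rng = np.random.default_rng(1)
partials = [rng.random((64, 64, 4)) * [1.0, 1.0, 1.0, 0.3] for _ in range(4)]
color, alpha = composite_front_to_back(partials)
print(color.shape, float(alpha.mean()))
```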
Combining MRI and VEP imaging to isolate the temporal response of visual cortical areas
NASA Astrophysics Data System (ADS)
Carney, Thom; Ales, Justin; Klein, Stanley A.
2008-02-01
The human brain has well over 30 cortical areas devoted to visual processing. Classical neuro-anatomical as well as fMRI studies have demonstrated that early visual areas have a retinotopic organization whereby adjacent locations in visual space are represented in adjacent areas of cortex within a visual area. At the 2006 Electronic Imaging meeting we presented a method using sprite graphics to obtain high-resolution retinotopic visual evoked potential responses using multi-focal m-sequence technology (mfVEP). We have used this method to record mfVEPs from up to 192 non-overlapping checkerboard stimulus patches scaled such that each patch activates about 12 mm² of cortex in area V1 and even less in V2. This dense coverage enables us to incorporate cortical folding constraints, given by anatomical MRI and fMRI results from the same subject, to isolate the V1 and V2 temporal responses. Moreover, the method offers a simple means of validating the accuracy of the extracted V1 and V2 time functions by comparing the results between left and right hemispheres that have unique folding patterns and are processed independently. Previous VEP studies have been contradictory as to which area responds first to visual stimuli. This new method accurately separates the signals from the two areas and demonstrates that both respond with essentially the same latency. A new method is introduced which describes better ways to isolate cortical areas using an empirically determined forward model. The method includes a novel steady-state mfVEP and complex SVD techniques. In addition, this evolving technology is put to use examining how stimulus attributes differentially impact the response in different cortical areas, in particular how fast nonlinear contrast processing occurs. This question is examined using both state-triggered kernel estimation (STKE) and m-sequence "conditioned kernels". The analysis indicates different contrast gain control processes in areas V1 and V2. Finally we show that our m-sequence multi-focal stimuli have advantages for integrating EEG and MEG for improved dipole localization.
Global motion perception deficits in autism are reflected as early as primary visual cortex
Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.
2014-01-01
Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
Growing Shrubs at the George O. White State Forest Nursery: What Has Worked and What Has Not
Gregory Hoss
2006-01-01
At the George O. White State Forest Nursery in Licking, MO, we annually grow about 20 species of shrubs. That number has been larger in some years. For most species, we purchase seeds locally and process them at our nursery. Our shrubs are used for wetland restoration, windbreaks, visual screens, riparian buffers, and wildlife plantings.
Strategic Engagement in Global S&T: Opportunities for Defense Research
2014-01-01
ERIC Educational Resources Information Center
Timerman, Anthony P.; Fenrick, Angela M.; Zamis, Thomas M.
2009-01-01
A sequence of exercises for the isolation and characterization of invertase (EC 3.2.1.26) from baker's yeast obtained from a local grocery store is outlined. Because the enzyme is colorless, the use of colored markers and the sequence of purification steps are designed to "visualize" the process by which a colorless protein is selectively…
fMRI evidence for areas that process surface gloss in the human visual cortex
Sun, Hua-Chun; Ban, Hiroshi; Di Luca, Massimiliano; Welchman, Andrew E.
2015-01-01
Surface gloss is an important cue to the material properties of objects. Recent progress in the study of the macaque brain has increased our understanding of the areas involved in processing information about gloss; however, the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. PMID:25490434
Engagement of the left extrastriate body area during body-part metaphor comprehension.
Lacey, Simon; Stilla, Randall; Deshpande, Gopikrishna; Zhao, Sinan; Stephens, Careese; McCormick, Kelly; Kemmerer, David; Sathian, K
2017-03-01
Grounded cognition explanations of metaphor comprehension predict activation of sensorimotor cortices relevant to the metaphor's source domain. We tested this prediction for body-part metaphors using functional magnetic resonance imaging while participants heard sentences containing metaphorical or literal references to body parts, and comparable control sentences. Localizer scans identified body-part-specific motor, somatosensory and visual cortical regions. Both subject- and item-wise analyses showed that, relative to control sentences, metaphorical but not literal sentences evoked limb metaphor-specific activity in the left extrastriate body area (EBA), paralleling the EBA's known visual limb-selectivity. The EBA focus exhibited resting-state functional connectivity with ipsilateral semantic processing regions. In some of these regions, the strength of resting-state connectivity correlated with individual preference for verbal processing. Effective connectivity analyses showed that, during metaphor comprehension, activity in some semantic regions drove that in the EBA. These results provide converging evidence for grounding of metaphor processing in domain-specific sensorimotor cortical activity. Published by Elsevier Inc.
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The Neuro-Inspired Vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the stage of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and realized, in 0.35 μm CMOS technology, prototypes of two image sensors that implement the V1 and V2 processing of the Hmax model.
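A simplified software analogue of the two stages realized on the sensor (a sketch with assumed filter parameters, not the chip's actual circuitry) is a bank of oriented Gabor filters followed by local max pooling.

```python
# V1-like oriented Gabor filtering followed by V2-like local max pooling (software sketch).
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def gabor_kernel(theta, size=11, sigma=2.5, wavelength=5.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

image = np.random.default_rng(0).random((64, 64))        # stand-in for a sensor frame

# V1-like stage: oriented filtering at four orientations
s1 = [np.abs(convolve(image, gabor_kernel(t))) for t in np.linspace(0, np.pi, 4, endpoint=False)]

# V2-like stage: local maxima over orientation and space, giving a compact feature map
# that would feed the downstream neural-network classifier (the V4 stage)
c1 = maximum_filter(np.max(np.stack(s1), axis=0), size=8)[::8, ::8]
print(c1.shape)          # (8, 8) pooled feature map
```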
Linnman, Clas; Appel, Lieuwe; Fredrikson, Mats; Gordh, Torsten; Söderlund, Anne; Långström, Bengt; Engler, Henry
2011-01-01
There are few diagnostic tools for chronic musculoskeletal pain, as structural imaging methods seldom reveal pathological alterations. This is especially true for Whiplash Associated Disorder, for which physical signs of persistent injuries to the neck have yet to be established. Here, we sought to visualize inflammatory processes in the neck region by means of Positron Emission Tomography using the tracer 11C-D-deprenyl, a potential marker for inflammation. Twenty-two patients with enduring pain after a rear impact car accident (Whiplash Associated Disorder grade II) and 14 healthy controls were investigated. Patients displayed significantly elevated tracer uptake in the neck, particularly in regions around the spinous process of the second cervical vertebra. This suggests that whiplash patients have signs of local persistent peripheral tissue inflammation, which may potentially serve as a diagnostic biomarker. The present investigation demonstrates that painful processes in the periphery can be objectively visualized and quantified with PET and that 11C-D-deprenyl is a promising tracer for these purposes. PMID:21541010
Selective weighting of action-related feature dimensions in visual working memory.
Heuer, Anna; Schubö, Anna
2017-08-01
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.
Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric
2013-01-04
It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.
Bastos, Andre M; Briggs, Farran; Alitto, Henry J; Mangun, George R; Usrey, W Martin
2014-05-28
Oscillatory synchronization of neuronal activity has been proposed as a mechanism to modulate effective connectivity between interacting neuronal populations. In the visual system, oscillations in the gamma-frequency range (30-100 Hz) are thought to subserve corticocortical communication. To test whether a similar mechanism might influence subcortical-cortical communication, we recorded local field potential activity from retinotopically aligned regions in the lateral geniculate nucleus (LGN) and primary visual cortex (V1) of alert macaque monkeys viewing stimuli known to produce strong cortical gamma-band oscillations. As predicted, we found robust gamma-band power in V1. In contrast, visual stimulation did not evoke gamma-band activity in the LGN. Interestingly, an analysis of oscillatory phase synchronization of LGN and V1 activity identified synchronization in the alpha (8-14 Hz) and beta (15-30 Hz) frequency bands. Further analysis of directed connectivity revealed that alpha-band interactions mediated corticogeniculate feedback processing, whereas beta-band interactions mediated geniculocortical feedforward processing. These results demonstrate that although the LGN and V1 display functional interactions in the lower frequency bands, gamma-band activity in the alert monkey is largely an emergent property of cortex. Copyright © 2014 the authors 0270-6474/14/347639-06$15.00/0.
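As an illustration of the kind of measure involved (a generic sketch, not the authors' exact analysis pipeline), the phase-locking value between two band-pass-filtered signals in a chosen frequency band can be computed as follows.

```python
# Band-limited phase-locking value (PLV) between two simulated LFP channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))   # 1 = perfect phase locking, 0 = none

fs = 1000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)                  # shared 10 Hz (alpha) component
lgn = shared + 0.5 * rng.standard_normal(t.size)
v1 = shared + 0.5 * rng.standard_normal(t.size)

for name, band in [("alpha", (8, 14)), ("beta", (15, 30)), ("gamma", (30, 100))]:
    print(name, round(plv(lgn, v1, fs, band), 3))
```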
Mobile device geo-localization and object visualization in sensor networks
NASA Astrophysics Data System (ADS)
Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael
2014-10-01
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods, including use of the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
Dillen, Claudia; Steyaert, Jean; Op de Beeck, Hans P; Boets, Bart
2015-05-01
The embedded figures test has often been used to reveal weak central coherence in individuals with autism spectrum disorder (ASD). Here, we administered a more standardized automated version of the embedded figures test in combination with the configural superiority task, to investigate the effect of contextual modulation on local feature detection in 23 adolescents with ASD and 26 matched typically developing controls. On both tasks both groups performed largely similarly in terms of accuracy and reaction time, and both displayed the contextual modulation effect. This indicates that individuals with ASD are equally sensitive compared to typically developing individuals to the contextual effects of the task and that there is no evidence for a local processing bias in adolescents with ASD.
Human motion tracking by temporal-spatial local gaussian process experts.
Zhao, Xu; Fu, Yun; Liu, Yuncai
2011-04-01
Human pose estimation via motion tracking systems can be considered as a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high dimensionality and multimodality of the conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process, yet they are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. In particular, we exploit the fact that, for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood, defined in the unified input-output space. This leads to a local mixture GP experts system composed of different local GP experts, each of which dominates a mapping behavior with a specific covariance function adapting to a local region. To handle the multimodality, we combine both temporal and spatial information and therefore obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves the performance of existing models.
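A minimal sketch of a single local GP expert (assumed kernel and neighborhood size; the full temporal-spatial mixture and the online sparsification are omitted) illustrates the core idea of fitting each prediction only to the test input's local neighborhood.

```python
# One local GP expert: for each test input, fit an RBF-kernel GP to its k nearest samples.
import numpy as np

def rbf(a, b, length=1.0):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / length**2)

def local_gp_predict(x_train, y_train, x_test, k=20, noise=1e-2):
    preds = []
    for xq in x_test:
        idx = np.argsort(((x_train - xq) ** 2).sum(1))[:k]      # local neighborhood
        Xl, yl = x_train[idx], y_train[idx]
        K = rbf(Xl, Xl) + noise * np.eye(k)
        k_star = rbf(xq[None, :], Xl)                           # (1, k) cross-covariance
        preds.append((k_star @ np.linalg.solve(K, yl)).item())  # GP posterior mean
    return np.array(preds)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 2))                           # stand-in "observations"
y = np.sin(x[:, 0]) + 0.3 * x[:, 1] + 0.05 * rng.standard_normal(500)   # stand-in "pose"
xq = rng.uniform(-3, 3, size=(5, 2))
print(local_gp_predict(x, y, xq))
```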
Knowledge Co-production Strategies for Water Resources Modeling and Decision Making
NASA Astrophysics Data System (ADS)
Gober, P.
2016-12-01
The limited impact of scientific information on policy making and climate adaptation in North America has raised awareness of the need for new modeling strategies and knowledge transfer processes. This paper outlines the rationale for a new paradigm in water resources modeling and management, using examples from the USA and Canada. Principles include anticipatory modeling, complex system dynamics, decision making under uncertainty, visualization, capacity to represent and manipulate critical trade-offs, stakeholder engagement, local knowledge, context-specific activities, social learning, vulnerability analysis, iterative and collaborative modeling, and the concept of a boundary organization. In this framework, scientists and stakeholders are partners in the production and dissemination of knowledge for decision making, and local knowledge is fused with scientific observation and methodology. Discussion draws from experience in building long-term collaborative boundary organizations in Phoenix, Arizona in the USA and the Saskatchewan River Basin (SRB) in Canada. Examples of boundary spanning activities include the use of visualization, the concept of a decision theater, infrastructure to support social learning, social networks, and reciprocity, simulation modeling to explore "what if" scenarios of the future, surveys to elicit how water problems are framed by scientists and stakeholders, and humanistic activities (theatrical performances, art exhibitions, etc.) to draw attention to local water issues. The social processes surrounding model development and dissemination are at least as important as modeling assumptions, procedures, and results in determining whether scientific knowledge will be used effectively for water resources decision making.
Evidence for global processing of complex visual displays
NASA Technical Reports Server (NTRS)
Munson, Robert C.; Horst, Richard L.
1986-01-01
'Polar graphic' displays, in which changes in system status are represented by distortions in the form of a geometric figure, were presented to subjects, and reaction time (RT) to discriminate system status was recorded. Of interest was the extent to which reaction time showed evidence of global processing of these displays as the number of nodes and difficulty of discrimination were varied. When discrimination of system status was easy, RT showed no increase with increasing number of nodes, providing evidence of global processing. When discrimination was difficult, systematic differences in RT as a function of the number of nodes suggested the invocation of other (local) processes, although the data were not consistent with a node-by-node search process.
Yang, Zhiyong; Heeger, David J.; Blake, Randolph
2014-01-01
Traveling waves of cortical activity, in which local stimulation triggers lateral spread of activity to distal locations, have been hypothesized to play an important role in cortical function. However, there is conflicting physiological evidence for the existence of spreading traveling waves of neural activity triggered locally. Dichoptic stimulation, in which the two eyes view dissimilar monocular patterns, can lead to dynamic wave-like fluctuations in visual perception and therefore, provides a promising means for identifying and studying cortical traveling waves. Here, we used voltage-sensitive dye imaging to test for the existence of traveling waves of activity in the primary visual cortex of awake, fixating monkeys viewing dichoptic stimuli. We find clear traveling waves that are initiated by brief, localized contrast increments in one of the monocular patterns and then, propagate at speeds of ∼30 mm/s. These results demonstrate that under an appropriate visual context, circuitry in visual cortex in alert animals is capable of supporting long-range traveling waves triggered by local stimulation. PMID:25343785
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
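The selection step can be caricatured in a few lines (a toy sketch with random feature maps; the actual model uses learned shape-selective features and a specific normalization circuit).

```python
# Toy priority-map selection: target modulation, divisive normalization, argmax.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
H, W, F = 32, 32, 8
features = rng.random((H, W, F))            # bottom-up responses of F feature channels
target = rng.random(F)                      # top-down target feature template

drive = features @ target                   # feature-based modulation (match to target)
norm = uniform_filter(features.sum(-1), size=5) + 1e-6   # pooled local activity
priority = drive / norm                     # divisive normalization keeps salient locations
                                            # from monopolizing attention
locus = np.unravel_index(np.argmax(priority), priority.shape)
print("attend location:", locus)
```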
Doesburg, Sam M; Herdman, Anthony T; Ribary, Urs; Cheung, Teresa; Moiseev, Alexander; Weinberg, Hal; Liotti, Mario; Weeks, Daniel; Grunau, Ruth E
2010-04-01
Local alpha-band synchronization has been associated with both cortical idling and active inhibition. Recent evidence, however, suggests that long-range alpha synchronization increases functional coupling between cortical regions. We demonstrate increased long-range alpha and beta band phase synchronization during short-term memory retention in children 6-10 years of age. Furthermore, whereas alpha-band synchronization between posterior cortex and other regions is increased during retention, local alpha-band synchronization over posterior cortex is reduced. This constitutes a functional dissociation for alpha synchronization across local and long-range cortical scales. We interpret long-range synchronization as reflecting functional integration within a network of frontal and visual cortical regions. Local desynchronization of alpha rhythms over posterior cortex, conversely, likely arises because of increased engagement of visual cortex during retention.
NASA Astrophysics Data System (ADS)
Ranjeva, Minna; Thompson, Lee; Perlitz, Daniel; Bonness, William; Capone, Dean; Elbing, Brian
2011-11-01
Cavitation is a major concern for the US Navy since it can cause ship damage and produce unwanted noise. The ability to precisely locate cavitation onset in laboratory-scale experiments is essential for proper design that will minimize this undesired phenomenon. Cavitation onset is more accurately determined acoustically than visually. However, if other parts of the model begin to cavitate prior to the component of interest, the acoustic data are contaminated with spurious noise. Consequently, cavitation onset is widely determined by optically locating the event of interest. The current research effort aims at developing an acoustic localization scheme for reverberant environments such as water tunnels. Currently, cavitation bubbles are being induced in a static water tank with a laser, allowing the localization techniques to be refined with the bubble at a known location. The source is located with the use of acoustic data collected with hydrophones and analyzed using signal processing techniques. To verify the accuracy of the acoustic scheme, the events are simultaneously monitored visually with a high-speed camera. Once refined, testing will be conducted in a water tunnel. This research was sponsored by the Naval Engineering Education Center (NEEC).
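Since the abstract does not specify the signal processing used, the following generic sketch illustrates one common building block for such localization: estimating the time difference of arrival of a transient at two hydrophones by cross-correlation (all signals and parameters here are synthetic assumptions).

```python
# Generic TDOA estimate between two hydrophone channels via cross-correlation.
import numpy as np

fs = 100_000                                   # sample rate [Hz]
rng = np.random.default_rng(0)
pulse = np.exp(-np.arange(200) / 20) * rng.standard_normal(200)   # synthetic transient

def record(delay_samples, n=4096, noise=0.2):
    x = noise * rng.standard_normal(n)
    x[delay_samples:delay_samples + pulse.size] += pulse
    return x

h1, h2 = record(1000), record(1240)            # true offset: 240 samples = 2.4 ms
xcorr = np.correlate(h2 - h2.mean(), h1 - h1.mean(), mode="full")
lag = np.argmax(xcorr) - (h1.size - 1)         # peak lag in samples
print(f"estimated TDOA: {lag / fs * 1e3:.2f} ms")
```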
Visual representation of spatiotemporal structure
NASA Astrophysics Data System (ADS)
Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.
1998-07-01
The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies concerning ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently of each other. This suggests a unified representation at an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.
Interactions between motion and form processing in the human visual system
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by “motion-streaks” influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS. PMID:23730286
Zugaj, D; Chenet, A; Petit, L; Vaglio, J; Pascual, T; Piketty, C; Bourdes, V
2018-02-04
Currently, imaging technologies that can accurately assess or provide surrogate markers of the human cutaneous microvessel network are limited. Dynamic optical coherence tomography (D-OCT) allows the detection of blood flow in vivo and visualization of the skin microvasculature. However, image processing is necessary to correct images, filter artifacts, and exclude irrelevant signals. The objective of this study was to develop a novel image processing workflow to enhance the technical capabilities of D-OCT. Single-center, vehicle-controlled study including healthy volunteers aged 18-50 years. A capsaicin solution was applied topically on the subject's forearm to induce local inflammation. Measurements of the capsaicin-induced increase in dermal blood flow, within the region of interest, were performed by laser Doppler imaging (LDI; the reference method) and D-OCT. Sixteen subjects were enrolled. A good correlation was shown between D-OCT and LDI, using the image processing workflow. Therefore, D-OCT offers an easy-to-use alternative to LDI, with good repeatability, new robust morphological features (dermal-epidermal junction localization), and quantification of the distribution of vessel size and changes in this distribution induced by capsaicin. The visualization of the vessel network was improved through block filtering and artifact removal. Moreover, the assessment of vessel size distribution allows a fine analysis of the vascular patterns. The newly developed image processing workflow enhances the technical capabilities of D-OCT for the accurate detection and characterization of microcirculation in the skin. A direct clinical application of this image processing workflow is the quantification of the effect of topical treatment on skin vascularization. © 2018 The Authors. Skin Research and Technology Published by John Wiley & Sons Ltd.
Milz, Patricia; Pascual-Marqui, Roberto D; Lehmann, Dietrich; Faber, Pascal L
2016-05-01
Functional states of the brain are constituted by the temporally attuned activity of spatially distributed neural networks. Such networks can be identified by independent component analysis (ICA) applied to frequency-dependent source-localized EEG data. This methodology allows the identification of networks at high temporal resolution in frequency bands of established location-specific physiological functions. EEG measurements are sensitive to neural activity changes in cortical areas of modality-specific processing. We tested effects of modality-specific processing on functional brain networks. Phasic modality-specific processing was induced via tasks (state effects) and tonic processing was assessed via modality-specific person parameters (trait effects). Modality-specific person parameters and 64-channel EEG were obtained from 70 male, right-handed students. Person parameters were obtained using cognitive style questionnaires, cognitive tests, and thinking modality self-reports. EEG was recorded during four conditions: spatial visualization, object visualization, verbalization, and resting. Twelve cross-frequency networks were extracted from source-localized EEG across six frequency bands using ICA. RMANOVAs, Pearson correlations, and path modelling examined effects of tasks and person parameters on networks. Results identified distinct state- and trait-dependent functional networks. State-dependent networks were characterized by decreased, trait-dependent networks by increased alpha activity in sub-regions of modality-specific pathways. Pathways of competing modalities showed opposing alpha changes. State- and trait-dependent alpha were associated with inhibitory and automated processing, respectively. Antagonistic alpha modulations in areas of competing modalities likely prevent intruding effects of modality-irrelevant processing. Considerable research suggested alpha modulations related to modality-specific states and traits. This study identified the distinct electrophysiological cortical frequency-dependent networks within which they operate.
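The ICA step itself can be illustrated generically (a toy sketch on a simulated mixture; the study applies ICA to frequency-wise source-localized EEG, which is not reproduced here).

```python
# Unmixing a toy multi-channel mixture into independent "network" time courses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000
t = np.linspace(0, 8, n_samples)

# two source time courses with non-Gaussian signatures
s1 = np.sign(np.sin(2 * np.pi * 1.0 * t))          # slow on/off engagement
s2 = np.sin(2 * np.pi * 10 * t) ** 3               # alpha-like bursty activity
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((n_samples, 2))

A = rng.random((6, 2))                             # mixing into 6 "channels"
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)                  # estimated network time courses
print(components.shape, ica.mixing_.shape)         # (2000, 2), spatial patterns: (6, 2)
```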
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
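A heavily simplified sketch of the two ingredients, feedback as a multiplicative gain on the feedforward drive and a normalized Hebbian weight update (the presynaptic inhibition dynamics of the actual model are omitted), might look as follows.

```python
# Simplified gain-modulated feedforward response with normalized Hebbian learning.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 16, 4, 0.05
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

for step in range(2000):
    x = rng.random(n_in)                            # input patch
    feedback = np.ones(n_out)
    feedback[step % n_out] = 2.0                    # top-down signal boosts one unit's gain
    y = feedback * (W @ x)                          # gain-modulated feedforward response
    W += lr * np.outer(y, x)                        # Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # normalization keeps weights bounded

print(np.round(W @ x, 3))                           # responses to the last input
```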
Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception.
Berger, Christopher C; Ehrsson, H Henrik
2018-04-01
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect-a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli-is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Boott, Charlotte E.; Laine, Romain F.; Mahou, Pierre; Finnegan, John R.; Leitao, Erin M.
2015-01-01
Analytical methods that enable visualization of nanomaterials derived from solution self-assembly processes in organic solvents are highly desirable. Herein, we demonstrate the use of stimulated emission depletion microscopy (STED) and single molecule localization microscopy (SMLM) to map living crystallization-driven block copolymer (BCP) self-assembly in organic media at the sub-diffraction scale. Four different dyes were successfully used for single-colour super-resolution imaging of the BCP nanostructures, allowing micelle length distributions to be determined in situ. Dual-colour SMLM imaging was used to measure and compare the rate of addition of red fluorescent BCP to the termini of green fluorescent seed micelles to generate block comicelles. Although such techniques are well established for aqueous systems, the results highlight the potential of super-resolution microscopy techniques for the interrogation of self-assembly processes in organic media. PMID:26477697
An ERP study of recognition memory for concrete and abstract pictures in school-aged children
Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J.; Jacobson, Sandra W.; Jacobson, Joseph L.
2016-01-01
Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N = 96; mean age = 11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as “new” or “old” (i.e., previously displayed) compared to abstract pictures. ERPs were characterized by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. PMID:27329352
Rauscher, Franziska G; Plant, Gordon T; James-Galton, Merle; Barbur, John L
2011-01-01
Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia, often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (x, y) chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show symmetric increase in thresholds towards the long wavelength ("red") and middle wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent.
Chapter 16: Lignin Visualization: Advanced Microscopy Techniques for Lignin Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Yining; Donohoe, Bryon S
Visualization of lignin in plant cell walls, with both spatial and chemical resolution, is emerging as an important tool to understand lignin's role in the plant cell wall's nanoscale architecture and to understand and design processes intended to modify the lignin. As such, this chapter reviews recent advances in advanced imaging methods with respect to lignin in plant cell walls. This review focuses on the importance of lignin detection and localization for studies in both plant biology and biotechnology. Challenges going forward to identify and delineate lignin from other plant cell wall components and to quantitatively analyze lignin in whole cell walls from native plant tissue and treated biomass are also discussed.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Visualization and analysis of medical data is an active research area. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, and other fields, and initial data processing is a major step towards obtaining a good diagnostic result. This paper presents an approach to image filtering that preserves object borders. The proposed algorithm is based on sequential data processing. In the first stage, local areas are determined using threshold processing together with the classical ICI algorithm. The second stage applies a method based on two criteria, namely the L2 norm and the first-order squared difference. To preserve object boundaries, the transition boundary and its local neighborhood are processed with a fixed-coefficient filter. Reconstructed CT, X-ray, and microbiological images are shown as examples, and the test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
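As a rough illustration of the two-stage idea described above (a minimal sketch, not the authors' ICI-based implementation; the gradient threshold, window size, and mixing weights are assumptions), the snippet below smooths homogeneous regions with a fixed-coefficient averaging filter while leaving pixels near detected object borders closer to their original values:

```python
import numpy as np
from scipy import ndimage

def edge_preserving_denoise(img, grad_thresh=0.1, size=5):
    """Simplified two-stage filter: smooth homogeneous regions,
    keep pixels near object borders closer to the original values."""
    img = img.astype(float)
    # Stage 1: locate candidate object borders from the local gradient magnitude.
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    border = grad > grad_thresh * grad.max()          # boundary / transition mask
    # Stage 2: fixed-coefficient (uniform) smoothing inside homogeneous regions.
    smooth = ndimage.uniform_filter(img, size=size)
    out = np.where(border,
                   0.7 * img + 0.3 * smooth,          # lighter smoothing at borders
                   smooth)                            # full smoothing elsewhere
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # toy "organ" region
    noisy = clean + 0.15 * rng.standard_normal(clean.shape)
    denoised = edge_preserving_denoise(noisy)
    print("noise std before/after:",
          round(float((noisy - clean).std()), 3),
          round(float((denoised - clean).std()), 3))
```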
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
NASA Astrophysics Data System (ADS)
Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.
2016-06-01
Current 3D data capture, as implemented on airborne or mobile laser scanning systems for example, can efficiently sample the surface of a city with billions of unselective points in a single working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points that scatter in all three directions are clustered into the tree class and then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
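The PCA-driven local dimensionality analysis mentioned in the workflow can be pictured as follows (a minimal single-machine sketch, not the IQmulus Spark implementation; the neighbourhood size and the scattering threshold used to flag candidate tree points are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_dimensionality(points, k=20):
    """Per-point eigenvalue features (linearity, planarity, scattering)
    from the covariance of the k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]       # l1 >= l2 >= l3 >= 0
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats[i] = [(l1 - l2) / l1,                      # linearity
                    (l2 - l3) / l1,                      # planarity
                    l3 / l1]                             # scattering
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    plane = np.c_[rng.uniform(0, 10, (500, 2)), 0.02 * rng.standard_normal(500)]  # ground-like sheet
    blob = rng.standard_normal((500, 3)) + [5, 5, 3]                              # canopy-like cluster
    pts = np.vstack([plane, blob])
    f = local_dimensionality(pts)
    is_scatter = f[:, 2] > 0.2            # hypothetical threshold for "tree" points
    print("scatter-dominated points:", int(is_scatter.sum()), "of", len(pts))
```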
A Novel Interhemispheric Interaction: Modulation of Neuronal Cooperativity in the Visual Areas
Carmeli, Cristian; Lopez-Aguado, Laura; Schmidt, Kerstin E.; De Feo, Oscar; Innocenti, Giorgio M.
2007-01-01
Background The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres. Methods/Principal Findings To understand if and how the interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical system theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate. Conclusions/Significance These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization. PMID:18074012
Where is your shoulder? Neural correlates of localizing others' body parts.
Felician, Olivier; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Roll, Jean-Pierre; Romaiguère, Patricia
2009-07-01
Neuropsychological studies, based on pointing to body parts paradigms, suggest that left posterior parietal lobe is involved in the visual processing of other persons' bodies. In addition, some patients have been found with a mild deficit when dealing with abstract human representations but marked impairment with realistically represented bodies, suggesting that this processing could be modulated by the abstraction level of the body to be analyzed. These issues were examined in the present fMRI experiment, designed to evaluate the effects of visually processing human bodies of different abstraction levels on brain activity. The human specificity of the studied processes was assessed using whole-body representations of humans and of dogs, while the effects of the abstraction level of the representation were assessed using drawings, photographs, and videos. To assess the effect of species and stimulus complexity on BOLD signal, we performed a two-way ANOVA with factors species (human versus animal) and stimulus complexity (drawings, photographs and videos). When pointing to body parts irrespective of the stimulus complexity, we observed a positive effect of humans over animals in the left angular gyrus (BA 39), as suggested by lesion studies. This effect was also present in midline cortical structures including mesial prefrontal, anterior cingulate and precuneal regions. When pointing to body parts irrespective of the species to be processed, we observed a positive effect of videos over photographs and drawings in the right superior parietal lobule (BA 7), and bilaterally in the superior temporal sulcus, the supramarginal gyrus (BA 40) and the lateral extrastriate visual cortex (including the "extrastriate body area"). Taken together, these data suggest that, in comparison with other mammals, the visual processing of other humans' bodies is associated with left angular gyrus activity, but also with midline structures commonly implicated in self-reference. They also suggest a role of the lateral extrastriate cortex in the processing of dynamic and biologically relevant body representations.
NASA Astrophysics Data System (ADS)
Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.
2015-12-01
A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10 TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) A SCL data model enables various CRM simulation outputs in NetCDF, including the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) model, to be accessed and processed by Hadoop, (2) A parallel NetCDF-to-CSV converter supports NU-WRF and GCE model outputs, (3) A technique visualizes Hadoop-resident data with IDL, (4) A technique subsets Hadoop-resident data, compliant to the SCL data model, with HIVE or Impala via HUE's Web interface, (5) A prototype enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a local computer, and inter-compare CRM output and data with GCE and NU-WRF.
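A minimal sketch of the kind of on-demand, cluster-side subsetting the SCL aims to support is given below, written in PySpark and assuming CRM output has already been converted to CSV with hypothetical columns time, lat, lon, and rain_rate (the path and schema are illustrative; this is not SCL code):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical CSV layout produced by a NetCDF-to-CSV conversion step:
# columns time, lat, lon, rain_rate (names are illustrative, not the SCL schema).
spark = SparkSession.builder.appName("crm-subset-demo").getOrCreate()

df = spark.read.csv("hdfs:///scl/gce_output/*.csv", header=True, inferSchema=True)

# Subset a space-time box and compute a simple domain statistic on the cluster,
# so only the small aggregate ever leaves the Hadoop file system.
subset = (df
          .filter((F.col("time") >= "2015-06-01") & (F.col("time") < "2015-06-02"))
          .filter(F.col("lat").between(25.0, 30.0) & F.col("lon").between(-95.0, -90.0)))

stats = subset.agg(F.mean("rain_rate").alias("mean_rain"),
                   F.max("rain_rate").alias("max_rain"))
stats.show()
spark.stop()
```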
Linking crowding, visual span, and reading.
He, Yingchen; Legge, Gordon E
2017-09-01
The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results provide evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
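The reported link between the ventriloquist effect and a weighted combination of unisensory spatial resolution is in line with standard reliability-weighted cue integration. The sketch below uses illustrative noise values only (not the authors' data or model fit) to show how the visual weight, and hence the visual capture of sound location, shrinks as visual reliability degrades toward the periphery:

```python
def fuse(mu_v, sigma_v, mu_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) combination of a visual
    and an auditory location estimate; returns fused mean and visual weight."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    return w_v * mu_v + (1 - w_v) * mu_a, w_v

# Illustrative unisensory noise (deg); visual noise grows faster with eccentricity.
sigma_v = {0: 1.0, 20: 2.5, 40: 5.0, 60: 9.0}
sigma_a = {0: 4.0, 20: 5.0, 40: 6.0, 60: 7.0}

for ecc in (0, 20, 40, 60):
    # Spatially discrepant pair: sound presented 10 deg away from the visual target.
    mu_v, mu_a = float(ecc), float(ecc) + 10.0
    fused, w_v = fuse(mu_v, sigma_v[ecc], mu_a, sigma_a[ecc])
    print(f"ecc {ecc:2d} deg: visual weight {w_v:.2f}, "
          f"perceived sound shifted {mu_a - fused:.1f} deg toward vision")
```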
Applying Semantics in Dataset Summarization for Solar Data Ingest Pipelines
NASA Astrophysics Data System (ADS)
Michaelis, J.; McGuinness, D. L.; Zednik, S.; West, P.; Fox, P. A.
2012-12-01
One goal in studying phenomena of the solar corona (e.g., flares, coronal mass ejections) is to create and refine predictive models of space weather - which have broad implications for terrestrial activity (e.g., communication grid reliability). The High Altitude Observatory (HAO) [1] presently maintains an infrastructure for generating time-series visualizations of the solar corona. Through raw data gathered at the Mauna Loa Solar Observatory (MLSO) in Hawaii, HAO performs follow-up processing and quality control steps to derive visualization sets consumable by scientists. Individual visualizations will acquire several properties during their derivation, including: (i) the source instrument at MLSO used to obtain the raw data, (ii) the time the data was gathered, (iii) processing steps applied by HAO to generate the visualization, and (iv) quality metrics applied over both the raw and processed data. In parallel to MLSO's standard data gathering, time-stamped observation logs are maintained by MLSO staff, which cover content of potential relevance to data gathered (such as local weather and instrument conditions). In this setting, while a significant amount of solar data is gathered, only small sections will typically be of interest to consuming parties. Additionally, direct presentation of solar data collections could overwhelm consumers (particularly those with limited background in the data structuring). This work explores how multidimensional-analysis-based navigation can be used to generate summary views of data collections, based on two operations: (i) grouping visualization entries based on similarity metrics (e.g., data gathered between 23:15-23:30 6-21-2012), or (ii) filtering entries (e.g., data with a quality score of UGLY, on a scale of GOOD, BAD, or UGLY). Here, semantic encodings of solar visualization collections (based on the Resource Description Framework (RDF) Datacube vocabulary [2]) are being utilized, based on the flexibility of the RDF model for supporting the following use cases: (i) Temporal alignment of time-stamped MLSO observations with raw data gathered at MLSO. (ii) Linking of multiple visualization entries to common (and structurally complex) workflow structures - designed to capture the visualization generation process. To provide real-world use cases for the described approach, a semantic summarization system is being developed for data gathered from HAO's Coronal Multi-channel Polarimeter (CoMP) and Chromospheric Helium-I Imaging Photometer (CHIP) pipelines. Web Links: [1] http://mlso.hao.ucar.edu/ [2] http://www.w3.org/TR/vocab-data-cube/
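A minimal sketch of the two summarization operations, grouping entries by time window and filtering them by quality flag, is given below using plain Python records; the field names are hypothetical, and the actual system encodes entries as RDF Datacube observations rather than dictionaries:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical visualization-entry records (the real pipeline stores these as RDF).
entries = [
    {"id": "comp_001", "instrument": "CoMP", "time": "2012-06-21T23:17:00", "quality": "GOOD"},
    {"id": "comp_002", "instrument": "CoMP", "time": "2012-06-21T23:26:00", "quality": "UGLY"},
    {"id": "chip_001", "instrument": "CHIP", "time": "2012-06-21T23:32:00", "quality": "BAD"},
]

def group_by_window(records, start, minutes=15):
    """Group entries into fixed time bins starting at `start` (operation i)."""
    bins = defaultdict(list)
    for r in records:
        t = datetime.fromisoformat(r["time"])
        k = int((t - start) // timedelta(minutes=minutes))
        bins[k].append(r["id"])
    return dict(bins)

def filter_by_quality(records, keep=("GOOD",)):
    """Keep only entries whose quality flag is acceptable (operation ii)."""
    return [r for r in records if r["quality"] in keep]

start = datetime(2012, 6, 21, 23, 15)
print(group_by_window(entries, start))          # e.g. {0: ['comp_001', 'comp_002'], 1: ['chip_001']}
print([r["id"] for r in filter_by_quality(entries)])
```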
Rojas-Líbano, Daniel; Wimmer Del Solar, Jonathan; Aguilar-Rivera, Marcelo; Montefusco-Siegmund, Rodrigo; Maldonado, Pedro Esteban
2018-05-16
An important unresolved question about neural processing is the mechanism by which distant brain areas coordinate their activities and relate their local processing to global neural events. Slow rhythms such as respiration are a potential candidate for this local-global integration. In this article, we asked if there are modulations of local cortical processing which are phase-locked to (peripheral) sensory-motor exploratory rhythms. We studied rats on an elevated platform where they would spontaneously display exploratory and rest behaviors. Concurrent with behavior, we monitored whisking through EMG and the respiratory rhythm from the olfactory bulb (OB) local field potential (LFP). We also recorded LFPs from dorsal hippocampus, primary motor cortex, primary somatosensory cortex and primary visual cortex. We defined exploration as simultaneous whisking and sniffing above 5 Hz and found that this activity peaked at about 8 Hz. We considered rest as the absence of whisking and sniffing, and in this case, respiration occurred at about 3 Hz. We found a consistent shift across all areas toward these rhythm peaks accompanying behavioral changes. We also found, across areas, that LFP gamma (70-100 Hz) amplitude could phase-lock to the animal's OB respiratory rhythm, a finding indicative of respiration-locked changes in local processing. In a subset of animals, we also recorded hippocampal theta activity and found that it occurred at frequencies that overlapped with respiration but was not spectrally coherent with it, suggesting a different oscillator. Our results are consistent with the notion of respiration as a binder or integrator of activity between brain regions.
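One common way to quantify the reported phase-locking of gamma amplitude to the respiratory rhythm is a mean-vector-length measure of phase-amplitude coupling; the sketch below (synthetic data, illustrative frequency bands, and not necessarily the authors' exact analysis) shows the basic computation from a single LFP trace:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(phase_sig, amp_sig, fs, phase_band, amp_band):
    """Mean-vector-length estimate of phase-amplitude coupling:
    how strongly the amplitude envelope of `amp_sig` (e.g. 70-100 Hz gamma)
    is locked to the phase of `phase_sig` (e.g. the OB respiratory rhythm)."""
    phase = np.angle(hilbert(bandpass(phase_sig, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(amp_sig, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

if __name__ == "__main__":
    fs, t = 1000, np.arange(0, 20, 1 / 1000)
    rng = np.random.default_rng(2)
    resp = np.sin(2 * np.pi * 3 * t)                          # ~3 Hz "respiration"
    gamma = (1 + 0.5 * resp) * np.sin(2 * np.pi * 85 * t)     # gamma amplitude rides on respiration
    lfp = resp + 0.3 * gamma + 0.2 * rng.standard_normal(t.size)
    mi = modulation_index(lfp, lfp, fs, phase_band=(2, 5), amp_band=(70, 100))
    print("modulation index:", round(float(mi), 3))
```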
Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K
2017-09-13
Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in context of gene expression profiles.
Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics
Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.
2011-01-01
Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
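The inference scheme can be illustrated with a toy discretized model (a sketch, not the authors' fitted observer model; the prior widths and noise levels are assumptions): a prior with extra mass at cardinal orientations is combined with a noisy measurement, and the resulting estimate is pulled toward the nearest cardinal as sensory uncertainty grows.

```python
import numpy as np

theta = np.linspace(0.0, 180.0, 721)                 # orientation grid (deg)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

# Environmental prior: more probability mass at cardinal orientations (0/90/180 deg).
prior = 0.2 + gauss(theta, 0, 8) + gauss(theta, 90, 8) + gauss(theta, 180, 8)
prior /= np.trapz(prior, theta)

def posterior_mean(measured_deg, sensory_sd):
    """Fuse a noisy orientation measurement with the cardinal-heavy prior."""
    like = gauss(theta, measured_deg, sensory_sd)
    post = like * prior
    post /= np.trapz(post, theta)
    return np.trapz(theta * post, theta)

true_orientation = 70.0                              # oblique stimulus
for sd in (3.0, 10.0, 25.0):                         # low -> high sensory uncertainty
    est = posterior_mean(true_orientation, sd)
    print(f"noise sd {sd:4.1f} deg -> estimate {est:5.1f} deg "
          f"(bias toward 90 deg: {est - true_orientation:+.1f})")
```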
Gao, Peng Fei; Yuan, Bin Fang; Gao, Ming Xuan; Li, Rong Sheng; Ma, Jun; Zou, Hong Yan; Li, Yuan Fang; Li, Ming; Huang, Cheng Zhi
2015-01-01
Insight into the nature of metal-sulfur bond, a meaningful one in life science, interface chemistry and organometallic chemistry, is interesting but challenging. By utilizing the localized surface plasmon resonance properties of silver nanoparticles, herein we visually identified the photosensitivity of silver-dithiocarbamate (Ag-DTC) bond by using dark field microscopic imaging (iDFM) technique at single nanoparticle level. It was found that the breakage of Ag-DTC bond could be accelerated effectively by light irradiation, followed by a pH-dependent horizontal or vertical degradation of the DTC molecules, in which an indispensable preoxidation process of the silver was at first disclosed. These findings suggest a visualization strategy at single plasmonic nanoparticle level which can be excellently applied to explore new stimulus-triggered reactions, and might also open a new way to understand traditional organic reaction mechanisms. PMID:26493773
Spectral properties of the temporal evolution of brain network structure.
Wang, Rong; Zhang, Zhen-Zhen; Ma, Jun; Yang, Yong; Lin, Pan; Wu, Ying
2015-12-01
The temporal evolution properties of the brain network are crucial for complex brain processes. In this paper, we investigate the differences in the dynamic brain network during resting and visual stimulation states in a task-positive subnetwork, task-negative subnetwork, and whole-brain network. The dynamic brain network is first constructed from human functional magnetic resonance imaging data based on the sliding window method, and then the eigenvalues corresponding to the network are calculated. We use eigenvalue analysis to analyze the global properties of eigenvalues and the random matrix theory (RMT) method to measure the local properties. For global properties, the shifting of the eigenvalue distribution and the decrease in the largest eigenvalue are linked to visual stimulation in all networks. For local properties, the short-range correlation in eigenvalues as measured by the nearest neighbor spacing distribution is not always sensitive to visual stimulation. However, the long-range correlation in eigenvalues as evaluated by spectral rigidity and number variance not only predicts the universal behavior of the dynamic brain network but also suggests non-consistent changes in different networks. These results demonstrate that the dynamic brain network is more random for the task-positive subnetwork and whole-brain network under visual stimulation but is more regular for the task-negative subnetwork. Our findings provide deeper insight into the importance of spectral properties in the functional brain network, especially the incomparable role of RMT in revealing the intrinsic properties of complex systems.
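A minimal sketch of the analysis pipeline described above, building sliding-window correlation matrices, extracting their eigenvalues, and computing nearest-neighbour spacings, is given below; the mean-normalization of spacings is a crude stand-in for proper spectral unfolding, and the window parameters are assumptions rather than the values used in the study:

```python
import numpy as np

def sliding_window_eigs(data, win, step):
    """Eigenvalues of the channel-by-channel correlation matrix
    computed in sliding windows over a (channels x time) array."""
    eigs = []
    for start in range(0, data.shape[1] - win + 1, step):
        corr = np.corrcoef(data[:, start:start + win])
        eigs.append(np.sort(np.linalg.eigvalsh(corr)))
    return np.array(eigs)                 # shape: (n_windows, n_channels)

def nn_spacings(eigenvalues):
    """Nearest-neighbour spacings, normalised to unit mean
    (a rough substitute for proper spectral unfolding)."""
    s = np.diff(np.sort(eigenvalues))
    return s / s.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_rois, n_t = 30, 3000
    shared = rng.standard_normal(n_t)                       # common "stimulus-driven" drive
    data = 0.6 * shared + rng.standard_normal((n_rois, n_t))
    eigs = sliding_window_eigs(data, win=300, step=100)
    print("largest eigenvalue per window (first 5):", np.round(eigs[:5, -1], 2))
    print("mean NN spacing variance:",
          round(float(np.mean([nn_spacings(e).var() for e in eigs])), 3))
```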
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokawa, Satoru; School of Pharmacy, Aichi Gakuin University, Nagoya 464-8650; Suzuki, Takahiro
We have visualized, for the first time, glucagon secretion using a method of video-rate bioluminescence imaging. The fusion protein of proglucagon and Gaussia luciferase (PGCG-GLase) was used as a reporter to detect glucagon secretion and was efficiently expressed in mouse pancreatic α cells (αTC1.6) using a preferred human codon-optimized gene. In the culture medium of the cells expressing PGCG-GLase, luminescence activity determined with a luminometer was increased with low glucose stimulation and KCl-induced depolarization, as observed for glucagon secretion. From immunochemical analyses, PGCG-GLase stably expressed in clonal αTC1.6 cells was correctly processed and released by secretory granules. Luminescence signals of the secreted PGCG-GLase from the stable cells were visualized by video-rate bioluminescence microscopy. The video images showed an increase in glucagon secretion from clustered cells in response to stimulation by KCl. The secretory events were observed frequently at the intercellular contact regions. Thus, the localization and frequency of glucagon secretion might be regulated by cell-cell adhesion. - Highlights: • The fusion protein of proglucagon and Gaussia luciferase was used as a reporter. • The fusion protein was highly expressed using a preferred human codon-optimized gene. • Glucagon secretion stimulated by depolarization was determined by luminescence. • Glucagon secretion in α cells was visualized by bioluminescence imaging. • Glucagon secretion sites were localized in the intercellular contact regions.
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
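A much-simplified version of the two building blocks can be sketched as follows, using a complex Gabor filter for the local phase (rather than the authors' FFT-based construction) and the Radon transform from scikit-image; the filter parameters and detection threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.transform import radon

def gabor_kernel(size=15, wavelength=6.0, sigma=3.0):
    """Complex Gabor kernel; its argument gives a local phase estimate along x."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.exp(1j * 2 * np.pi * x / wavelength) * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def local_phase(frame, kernel):
    return np.angle(convolve2d(frame, kernel, mode="same", boundary="symm"))

def motion_detected(frame_a, frame_b, threshold=50.0):
    """Block 1: temporal change of the local phase between two frames.
    Block 2: compare the maximum of the Radon transform of that change to a threshold."""
    k = gabor_kernel()
    dphi = np.angle(np.exp(1j * (local_phase(frame_b, k) - local_phase(frame_a, k))))  # wrap to (-pi, pi]
    sinogram = radon(np.abs(dphi), theta=np.linspace(0.0, 180.0, 45), circle=False)
    return sinogram.max() > threshold, round(float(sinogram.max()), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    static = rng.standard_normal((64, 64))
    moved = np.roll(static, shift=2, axis=1)          # rightward shift = motion
    print("static pair :", motion_detected(static, static))
    print("shifted pair:", motion_detected(static, moved))
```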
Binding of motion and colour is early and automatic.
Blaser, Erik; Papathomas, Thomas; Vidnyánszky, Zoltán
2005-04-01
At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour-contingent motion after-effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite-motion dot display, a stimulus that, importantly, is known to trigger direction-specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.
A perceptual space of local image statistics.
Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M
2015-12-01
Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
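The quadratic interaction rule can be made concrete with a small numerical sketch (illustrative sensitivities and interaction term, not the fitted values from the study): squared sensitivity along any unit direction in the statistic space is modeled as a quadratic form, which yields elliptical isodiscrimination contours and predicted thresholds for mixed statistics.

```python
import numpy as np

# Illustrative sensitivities (1/threshold) along two statistic axes and their interaction.
s1, s2 = 4.0, 2.5            # e.g. a second-order and a third-order statistic (made-up values)
rho = -0.4                   # hypothetical pairwise interaction term

# Quadratic-form model: squared sensitivity along unit direction u is u^T Q u.
Q = np.array([[s1**2,         rho * s1 * s2],
              [rho * s1 * s2, s2**2        ]])

def predicted_sensitivity(direction):
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    return float(np.sqrt(u @ Q @ u))

for d in [(1, 0), (0, 1), (1, 1), (1, -1)]:
    print(f"direction {d}: sensitivity {predicted_sensitivity(d):.2f}, "
          f"threshold {1 / predicted_sensitivity(d):.3f}")
```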
González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M
2014-10-01
Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schauppenlehner, Thomas; Salak, Boris; Scherhaufer, Patrick; Höltinger, Stefan; Schmidt, Johannes
2017-04-01
Because of its efficiency and the broad availability of wind, wind energy is a focus of strategies for expanding renewable energy and of energy transition policies. Nevertheless, the dimensions of wind turbines and their rotating dynamics have a significant impact on the landscape scenery, recreation, and tourism activities. This often leads to local opposition against wind energy projects and is a major criterion for the acceptance of wind energy. In the TransWind project, the social acceptance of wind energy is surveyed on the basis of different development scenarios for Austria. To this end, a GIS-based viewshed indicator was developed to assess the visual impact of the different development scenarios as well as the current situation, using a viewshed analysis weighted with respect to distance, amount and masking. These weighted viewshed maps for Austria allow a comprehensive evaluation of existing and potential wind energy sites regarding dominance and visual impact and can contribute to the spatial development process of wind energy sites. Different regions can be compared and repowering strategies can be evaluated. Because of the large project area, the data resolution, generalized assumptions (e.g. tree heights), and missing data at the local level (e.g. solitary trees, small hedges), further analysis is necessary, but the approach supports the assessment of large-scale development scenarios.
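A toy version of a distance-weighted viewshed indicator might look like the sketch below (not the TransWind GIS implementation; the linear distance decay, the cut-off distance, and the stand-in visibility mask are assumptions):

```python
import numpy as np

def visual_impact_map(visible, distance_km, max_dist_km=10.0):
    """Combine per-turbine boolean viewsheds with a linear distance-decay weight.

    visible     : (n_turbines, rows, cols) boolean array from a viewshed analysis
    distance_km : (n_turbines, rows, cols) distance of each cell to each turbine
    """
    weight = np.clip(1.0 - distance_km / max_dist_km, 0.0, 1.0)   # hypothetical decay
    return (visible * weight).sum(axis=0)                         # higher = more visually dominant

if __name__ == "__main__":
    rows = cols = 50
    yy, xx = np.mgrid[0:rows, 0:cols]
    turbines = [(10, 10), (40, 30)]                                # grid positions of two turbines
    dist = np.stack([np.hypot(yy - r, xx - c) * 0.4 for r, c in turbines])  # cell size = 0.4 km
    visible = dist < 8.0                                           # stand-in for terrain/masking analysis
    impact = visual_impact_map(visible, dist)
    print("max impact score:", round(float(impact.max()), 2),
          "mean:", round(float(impact.mean()), 2))
```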
Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai
2013-05-01
Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly instead of simply extending 2D metrics to the 3D case, as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric of stereoscopic images by considering the binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching error between the corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just noticeable difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that compared with the relevant existing metrics, the proposed metric can achieve higher consistency with subjective assessment of stereoscopic images.
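A heavily simplified sketch of the metric's overall structure, region classification from the left-right matching error followed by weighted pooling of per-region scores, is given below; the thresholds, the per-pixel distortion measure, and the pooling weights are illustrative assumptions, and the actual metric uses local phase/amplitude features and a binocular just-noticeable-difference model:

```python
import numpy as np

def classify_regions(match_error, occlusion_mask, fusion_thresh=0.05):
    """0 = non-corresponding, 1 = binocular fusion, 2 = binocular suppression."""
    regions = np.full(match_error.shape, 2, dtype=int)          # default: suppression
    regions[match_error <= fusion_thresh] = 1                    # small matching error -> fusion
    regions[occlusion_mask] = 0                                  # no correspondence at all
    return regions

def pooled_quality(distortion, regions, weights=(0.2, 0.5, 0.3)):
    """Score each region type by its mean distortion, then combine with fixed weights
    (the weights are illustrative, not the values used in the paper)."""
    scores = []
    for r in range(3):
        mask = regions == r
        scores.append(1.0 / (1.0 + distortion[mask].mean()) if mask.any() else 1.0)
    return float(np.dot(weights, scores))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    h, w = 60, 80
    match_error = np.abs(rng.normal(0.03, 0.03, (h, w)))         # disparity matching error
    occl = np.zeros((h, w), bool); occl[:, :5] = True            # left image border occlusions
    distortion = np.abs(rng.normal(0.1, 0.05, (h, w)))           # per-pixel distortion measure
    regions = classify_regions(match_error, occl)
    print("region counts:", np.bincount(regions.ravel(), minlength=3))
    print("overall quality score:", round(pooled_quality(distortion, regions), 3))
```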
A role for the membrane protein M6 in the Drosophila visual system.
Zappia, María Paula; Bernabo, Guillermo; Billi, Silvia C; Frasch, Alberto C; Ceriani, María Fernanda; Brocco, Marcela Adriana
2012-07-04
Members of the proteolipid protein family, including the four-transmembrane glycoprotein M6a, are involved in neuronal plasticity in mammals. Results from our group previously demonstrated that M6, the only proteolipid protein expressed in Drosophila, localizes to the cell membrane in follicle cells. M6 loss triggers female sterility, which suggests a role for M6 in follicular cell remodeling. These results were the basis of the present study, which focused on the function and requirements of M6 in the fly nervous system. The present study identified two novel, tissue-regulated M6 isoforms with variable N- and C- termini, and showed that M6 is the functional fly ortholog of Gpm6a. In the adult brain, the protein was localized to several neuropils, such as the optic lobe, the central complex, and the mushroom bodies. Interestingly, although reduced M6 levels triggered a mild rough-eye phenotype, hypomorphic M6 mutants exhibited a defective response to light. Based on its ability to induce filopodium formation we propose that M6 is key in cell remodeling processes underlying visual system function. These results bring further insight into the role of M6/M6a in biological processes involving neuronal plasticity and behavior in flies and mammals.
Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2012-05-04
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-01-01
Background The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. Methodology/Principal Findings The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. Conclusions/Significance The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing. PMID:20967247
Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio
2015-02-19
Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
Mercier, Manuel R; Schwartz, Sophie; Spinelli, Laurent; Michel, Christoph M; Blanke, Olaf
2017-03-01
The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, processing essentially form information. However, these two pathways also communicate to share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about the implications of focal brain damage of visual areas on these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, currently associated with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. On the other hand, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions. This study shows that in the present case report, successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, this study provides further support for neuro-imaging studies and brain lesion investigations that have suggested the existence of different brain circuits associated with different profiles of interaction between the dorsal and the ventral streams.
CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.
Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J
2015-01-01
CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
Visualizing value for money in public health interventions.
Leigh-Hunt, Nicholas; Cooper, Duncan; Furber, Andrew; Bevan, Gwyn; Gray, Muir
2018-01-23
The Socio-Technical Allocation of Resources (STAR) has been developed for value for money analysis of health services through stakeholder workshops. This article reports on its application for prioritization of interventions within public health programmes. The STAR tool was used by identifying costs and service activity for interventions within commissioned public health programmes, with benefits estimated from the literature on economic evaluations in terms of costs per Quality-Adjusted Life Years (QALYs); consensus on how these QALY values applied to local services was obtained with local commissioners. Local cost-effectiveness estimates could be made for some interventions. Methodological issues arose from gaps in the evidence base for other interventions, inability to closely match some performance monitoring data with interventions, and disparate time horizons of published QALY data. Practical adjustment for these issues included using population prevalences and utility states where intervention specific evidence was lacking, and subdivision of large contracts into specific intervention costs using staffing ratios. The STAR approach proved useful in informing commissioning decisions and understanding the relative value of local public health interventions. Further work is needed to improve robustness of the process and develop a visualization tool for use by public health departments. © The Author(s) 2018. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.
2012-01-01
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535
Lisicki, Marco; D'Ostilio, Kevin; Erpicum, Michel; Schoenen, Jean; Magis, Delphine
2017-01-01
Background: Migraine is a complex multifactorial disease that arises from the interaction between a genetic predisposition and an enabling environment. Habituation is considered a fundamental adaptive behaviour of the nervous system that is often impaired in migraine populations. Given that migraineurs are hypersensitive to light, and that light deprivation can induce functional changes in the visual cortex detectable through visual evoked potential habituation testing, we hypothesized that regional sunlight irradiance levels could influence the results of visual evoked potential habituation studies performed in different locations worldwide. Methods: We searched the literature for visual evoked potential habituation studies comparing healthy volunteers and episodic migraine patients and correlated their results with levels of local solar radiation. Results: After reviewing the literature, 26 studies involving 1291 participants matched our inclusion criteria. Deficient visual evoked potential habituation in episodic migraine patients was reported in 19 studies. Mean yearly sunlight irradiance was significantly higher in the locations of studies reporting deficient habituation. Correlation analyses suggested that visual evoked potential habituation decreases with increasing sunlight irradiance in migraine without aura patients. Conclusion: Results from this hypothesis-generating analysis suggest that variations in sunlight irradiance may induce adaptive modifications in visual processing systems that could be reflected in visual evoked potential habituation, and thus partially account for the difference in results between studies performed in geographically distant centers. Other causal factors such as genetic differences could also play a role, and therefore well-designed prospective trials are warranted.
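A minimal sketch of the kind of correlation analysis described, relating each study's mean yearly sunlight irradiance to its reported habituation measure. The arrays are illustrative placeholders, not data from the 26 reviewed studies, and the choice of a Pearson coefficient is an assumption.

```python
# Sketch only: correlate per-study sunlight irradiance with a habituation index.
# Values are illustrative placeholders, not data from the reviewed studies.
from scipy.stats import pearsonr

irradiance = [1100, 1250, 1400, 1600, 1750]     # kWh/m^2/year at each study site
habituation = [0.15, 0.10, 0.02, -0.05, -0.12]  # e.g., VEP amplitude slope per study

r, p = pearsonr(irradiance, habituation)
print(f"r = {r:.2f}, p = {p:.3f}")
```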
Image processing and 3D visualization in the interpretation of patterned injury of the skin
NASA Astrophysics Data System (ADS)
Oliver, William R.; Altschuler, Bruce R.
1995-09-01
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning to investigate data-acquisition problems in performing 3D scene reconstructions, from the pathology perspective of correlating tissue injury with scene features and trace evidence localization. Our primary tool for correlating surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444
Meaux, Emilie; Vuilleumier, Patrik
2016-11-01
The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top + happy bottom), incongruent composite configurations (e.g., angry top + happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other, non-target part. Results indicate that the recognition of happy and angry expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and angry expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform and inferior occipital areas and the amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged the STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and the pulvinar when features were seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by reciprocal interactions and modulated by task demands. Copyright © 2016 Elsevier Inc. All rights reserved.
Meyer, P.D.; Greenlee, Susan K.; Gesch, Dean B.; Hubl, Erik J.; Axmann, Ryan N.
2005-01-01
The Lincoln Lidar Project was a partnership developed between the U.S. Geological Survey National Center for Earth Resources Observations and Science (EROS), Lancaster County and the city of Lincoln, Nebraska. This project demonstrated successful planning, collection, analysis and integration of high-resolution elevation information using Light Detection and Ranging (Lidar) data. This report describes the partnership developed to collect local Lidar data and transform the data into information usable at local to national levels. This report specifically describes project planning, quality assurance, processing, transforming raw Lidar points into usable data layers, and visualizing and disseminating the raw and final products.
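One small, generic step in turning raw Lidar points into usable data layers is gridding the point cloud into an elevation raster. The sketch below illustrates that step only; it is not the Lincoln Lidar Project's processing chain, and the cell size and synthetic points are placeholders.

```python
# Generic sketch: grid raw Lidar returns (x, y, z) into a mean-elevation raster.
import numpy as np

def grid_mean_elevation(points, cell_size):
    """points: (N, 3) array of x, y, z; returns a 2D array of mean z per cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) // cell_size).astype(int)
    row = ((y - y.min()) // cell_size).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.nan)
    sums = np.zeros_like(grid)
    counts = np.zeros_like(grid)
    np.add.at(sums, (row, col), z)      # accumulate elevations per cell
    np.add.at(counts, (row, col), 1)    # count returns per cell
    mask = counts > 0
    grid[mask] = sums[mask] / counts[mask]
    return grid

points = np.array([[0.2, 0.3, 350.1], [0.8, 0.4, 350.4], [2.1, 1.9, 352.0]])
print(grid_mean_elevation(points, cell_size=1.0))
```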
NASA Astrophysics Data System (ADS)
Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz
2018-02-01
Knowledge about tumor bed localization and its shape is a crucial factor for preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors placed near soft tissues, which undergo complex, nonrigid deformations. Among them, breast cancer can be considered the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and target 3D images, respectively, because the tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error is not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free form deformations (B-Splines FFD), different variants of the Demons, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment using the dedicated artificial deformations, target registration error calculation, 3D contour propagation and medical experts' visual judgment. The results demonstrate that the registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons provide the most accurate soft tissue alignment in terms of the ability to reconstruct the deformation field, target registration error and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem, even though their visual alignment seems better than for the Demons algorithm. However, no algorithm could recover the deformation field with sufficient accuracy in terms of vector length and rotation angle differences.
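One of the evaluation measures mentioned, target registration error (TRE), is conventionally the distance between corresponding landmarks after applying the estimated transform. A minimal sketch of that standard definition follows; treating the transform as an arbitrary callable and averaging over landmarks are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def target_registration_error(transform, source_landmarks, target_landmarks):
    """Mean Euclidean distance between transformed source landmarks and their
    corresponding target landmarks. `transform` maps an (N, 3) array to (N, 3)."""
    warped = transform(np.asarray(source_landmarks, dtype=float))
    diffs = warped - np.asarray(target_landmarks, dtype=float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Hypothetical example with a pure translation as the "registration" result.
src = [[0, 0, 0], [10, 0, 0]]
tgt = [[1, 0, 0], [11, 0, 1]]
print(target_registration_error(lambda p: p + np.array([1.0, 0.0, 0.0]), src, tgt))
```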
Altschuler, Ted S; Molholm, Sophie; Butler, John S; Mercier, Manuel R; Brandwein, Alice B; Foxe, John J
2014-04-15
The adult human visual system can efficiently fill in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second, termed a conceptual phase, occurs between 230 and 400 ms and has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of both phases have been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N=63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions, resembling those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent. Copyright © 2013 Elsevier Inc. All rights reserved.
The Role of Visual Processing Speed in Reading Speed Development
Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane
2013-01-01
A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span), predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight and nine year old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
... originating local programming as defined by § 74.701(h) operating over 0.001 kw peak visual power (0.002 kw... visual presentation or a clearly understandable aural presentation of the translator station's call... identification procedures given in § 73.1201 when locally originating programming, as defined by § 74.701(h). The...
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed to open eyes, however, also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) states in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
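The key predictor here, the local-to-global ratio of GABA(A) receptor binding potential, reads as a simple normalization: regional binding divided by a global mean. The sketch below encodes that assumed definition; the interpretation of "global" and the numbers are placeholders, not the study's actual computation or values.

```python
import numpy as np

def local_to_global_ratio(regional_bp, global_bp_values):
    """Assumed definition: regional GABA(A) binding potential normalized by the
    mean binding potential across a set of global reference regions."""
    return regional_bp / np.mean(global_bp_values)

# Placeholder numbers for illustration only (not values from the study).
visual_cortex_bp = 5.2
reference_region_bps = [4.0, 4.5, 5.2, 3.8, 4.9]
print(round(local_to_global_ratio(visual_cortex_bp, reference_region_bps), 3))
```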
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2010-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2012-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
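The processing chain described in both patent abstracts can be sketched roughly as follows. This is not the calibrated Spatial Standard Observer itself: the luminance conversion, filter shapes, constants and pooling exponent below are guesses used only to illustrate the stated sequence of steps (luminance images, local-mean normalization to contrast, contrast-sensitivity filtering, and combination into a single JND-like value).

```python
# Rough sketch of the described pipeline, NOT the patented, calibrated SSO.
import numpy as np
from scipy.ndimage import gaussian_filter

def jnd_like_difference(test_gray, ref_gray, luminance_gain=1.0,
                        mean_sigma=8.0, csf_sigmas=(1.0, 3.0), beta=2.4):
    # 1. Grayscale -> luminance (placeholder: a simple scaling).
    test_lum = luminance_gain * test_gray.astype(float)
    ref_lum = luminance_gain * ref_gray.astype(float)
    # 2. Local mean luminance of the *reference* image (Gaussian low-pass).
    local_mean = gaussian_filter(ref_lum, mean_sigma) + 1e-6
    # 3. Contrast images, both normalized by the reference local mean.
    test_con = (test_lum - local_mean) / local_mean
    ref_con = (ref_lum - local_mean) / local_mean
    # 4. Crude contrast-sensitivity filter: band-pass via difference of Gaussians.
    def csf(img):
        return gaussian_filter(img, csf_sigmas[0]) - gaussian_filter(img, csf_sigmas[1])
    # 5. Minkowski-style pooling of the filtered difference into one number.
    diff = csf(test_con) - csf(ref_con)
    return float(np.sum(np.abs(diff) ** beta) ** (1.0 / beta))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
test = ref + 5.0  # hypothetical slightly brighter test image
print(jnd_like_difference(test, ref))
```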
Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion
Niehorster, Diederick C.
2017-01-01
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
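Under the retinal-motion-nulling logic described, the flow parsing gain can be read as the fraction of the self-motion-induced retinal motion that the visual system subtracts: the nulling retinal speed divided by the retinal speed component caused by self-motion. The one-line sketch below states that assumed ratio; it is an interpretation of the abstract, not the authors' published formula, and the example speeds are hypothetical.

```python
def flow_parsing_gain(nulling_speed, self_motion_component_speed):
    """Assumed definition: fraction of the self-motion-induced retinal motion
    that is subtracted; a gain of 1.0 would mean complete, accurate flow parsing."""
    return nulling_speed / self_motion_component_speed

# Hypothetical example: 3 deg/s of nulling motion against a 4 deg/s component.
print(flow_parsing_gain(3.0, 4.0))  # 0.75, i.e., below unity
```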
[Interest of ultrasonographic guidance in paediatric regional anaesthesia].
Dadure, C; Raux, O; Rochette, A; Capdevila, X
2009-10-01
The use of ultrasonographic guidance for regional anaesthesia in children has attracted considerable interest in recent years. Linear ultrasound probes with a 25 mm active surface (or a 38 mm active surface in older children) and high frequencies in the 8-14 MHz range offer a good compromise between excellent resolution of superficial structures and adequate penetration depth. In children, the easiest ultrasound-guided blocks are axillary blocks, femoral blocks, fascia iliaca compartment blocks, ilio-inguinal blocks, para-umbilical blocks and caudal blocks; these allow a safe and straightforward learning curve for the techniques. The main advantage of ultrasound-guided regional anaesthesia is the visualization of the different anatomical structures and of the approximate position of the needle tip. Other advantages of ultrasound-guided peripheral nerve blocks in children are a faster onset of sensory and motor block, a longer duration of sensory blockade, improved block quality and a reduction in the volume of local anaesthetic injected. For central blocks, ultrasonographic guidance allows visualization of different structures, including the spine and its contents. The spinous processes, ligamentum flavum, dura mater, conus medullaris and cerebrospinal fluid are identifiable and provide information on the spine, the epidural space and the depth from the skin to the epidural space. Finally, in caudal blocks, ultrasound can be used to assess the anatomy of the caudal epidural space, in particular the relationship of the sacral hiatus to the dural sac, and to screen for occult spinal dysraphism. The benefit of this technique is the visualization of the targeted nerves or spaces and of the spread of the injected local anaesthetic.
Lorenz, Susanne; Dessai, Suraje; Forster, Piers M.; Paavola, Jouni
2015-01-01
Visualizations are widely used in the communication of climate projections. However, their effectiveness has rarely been assessed among their target audience. Given recent calls to increase the usability of climate information through the tailoring of climate projections, it is imperative to assess the effectiveness of different visualizations. This paper explores the complexities of tailoring through an online survey conducted with 162 local adaptation practitioners in Germany and the UK. The survey examined respondents’ assessed and perceived comprehension (PC) of visual representations of climate projections as well as preferences for using different visualizations in communicating and planning for a changing climate. Comprehension and use are tested using four different graph formats, which are split into two pairs. Within each pair the information content is the same but is visualized differently. We show that even within a fairly homogeneous user group, such as local adaptation practitioners, there are clear differences in respondents’ comprehension of and preference for visualizations. We do not find a consistent association between assessed comprehension and PC or use within the two pairs of visualizations that we analysed. There is, however, a clear link between PC and use of graph format. This suggests that respondents use what they think they understand the best, rather than what they actually understand the best. These findings highlight that audience-specific targeted communication may be more complex and challenging than previously recognized. PMID:26460109
Imaging mycobacterial growth and division with a fluorogenic probe.
Hodges, Heather L; Brown, Robert A; Crooks, John A; Weibel, Douglas B; Kiessling, Laura L
2018-05-15
Control and manipulation of bacterial populations requires an understanding of the factors that govern growth, division, and antibiotic action. Fluorescent and chemically reactive small molecule probes of cell envelope components can visualize these processes and advance our knowledge of cell envelope biosynthesis (e.g., peptidoglycan production). Still, fundamental gaps remain in our understanding of the spatial and temporal dynamics of cell envelope assembly. Previously described reporters require steps that limit their use to static imaging. Probes that can be used for real-time imaging would advance our understanding of cell envelope construction. To this end, we synthesized a fluorogenic probe that enables continuous live cell imaging in mycobacteria and related genera. This probe reports on the mycolyltransferases that assemble the mycolic acid membrane. This peptidoglycan-anchored bilayer-like assembly functions to protect these cells from antibiotics and host defenses. Our probe, quencher-trehalose-fluorophore (QTF), is an analog of the natural mycolyltransferase substrate. Mycolyltransferases process QTF by diverting their normal transesterification activity to hydrolysis, a process that unleashes fluorescence. QTF enables high contrast continuous imaging and the visualization of mycolyltransferase activity in cells. QTF revealed that mycolyltransferase activity is augmented before cell division and localized to the septa and cell poles, especially at the old pole. This observed localization suggests that mycolyltransferases are components of extracellular cell envelope assemblies, in analogy to the intracellular divisomes and polar elongation complexes. We anticipate QTF can be exploited to detect and monitor mycobacteria in physiologically relevant environments.
Attention modulates visual size adaptation.
Kreutzer, Sylvia; Fink, Gereon R; Weidner, Ralph
2015-01-01
The current study determined in healthy subjects (n = 16) whether size adaptation occurs at early, i.e., preattentive, levels of processing or whether higher cognitive processes such as attention can modulate the illusion. To investigate this issue, bottom-up stimulation was kept constant across conditions by using a single adaptation display containing both small and large adapter stimuli. Subjects' attention was directed to either the large or small adapter stimulus by means of a luminance detection task. When attention was directed toward the small as compared to the large adapter, the perceived size of the subsequent target was significantly increased. Data suggest that different size adaptation effects can be induced by one and the same stimulus depending on the current allocation of attention. This indicates that size adaptation is subject to attentional modulation. These findings are in line with previous research showing that transient as well as sustained attention modulates visual features, such as contrast sensitivity and spatial frequency, and influences adaptation in other contexts, such as motion adaptation (Alais & Blake, 1999; Lankheet & Verstraten, 1995). Based on a recently suggested model (Pooresmaeili, Arrighi, Biagi, & Morrone, 2013), according to which perceptual adaptation is based on local excitation and inhibition in V1, we conclude that guiding attention can boost these local processes in one or the other direction by increasing the weight of the attended adapter. In sum, perceptual adaptation, although reflected in changes of neural activity at early levels (as shown in the aforementioned study), is nevertheless subject to higher-order modulation.
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background: Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results: We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions: Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
Independent and additive repetition priming of motion direction and color in visual search.
Kristjánsson, Arni
2009-03-01
Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target-defining dimension was color, a large effect of color repetition was seen, as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: this time, the effect of motion direction repetition was larger than that of color repetition. Finally, when neither was task relevant and the target-defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and priming following repetition of motion; the two features show independent and additive priming effects, most likely reflecting that they are processed at separate sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.
Effects of symbol type and numerical distance on the human event-related potential.
Jiang, Ting; Qiao, Sibing; Li, Jin; Cao, Zhongyu; Gao, Xuefei; Song, Yan; Xue, Gui; Dong, Qi; Chen, Chuansheng
2010-01-01
This study investigated the influence of the symbol type and numerical distance of numbers on the amplitudes and peak latencies of event-related potentials (ERPs). Our aim was to (1) determine the point in time of magnitude information access in visual number processing and (2) identify at what stage the advantage of Arabic digits over Chinese verbal numbers occurs. ERPs were recorded from 64 scalp sites while subjects (n=26) performed a classification task. Results showed that larger ERP amplitudes were elicited by numbers in the distance-close condition than in the distance-far condition in the VPP component over centro-frontal sites. Furthermore, the VPP latency varied as a function of the symbol type, but the N170 did not. These results demonstrate that magnitude information access takes place as early as 150 ms after the onset of visual number stimuli and that the advantage of Arabic digits over verbal numbers should be localized to the VPP component. We establish the VPP component as a critical ERP component to report in studies of numerical cognition, and our results call into question the N170/VPP association hypothesis and the serial-stage model of visual number comparison processing.
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
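The final step described, turning the 3D locations of tracked features into an estimate of overall vehicle motion, is commonly done by fitting a rigid transform between the two sets of corresponding 3D points, for example with the SVD-based Kabsch method sketched below. This illustrates the generic step only; it is not the MER flight software and omits the robust outlier handling the abstract alludes to.

```python
# Generic sketch: estimate rigid motion (R, t) mapping 3D feature positions at
# time k to their positions at time k+1. Real visual odometry adds robust
# outlier rejection (e.g., RANSAC) around this core step.
import numpy as np

def estimate_rigid_motion(points_prev, points_curr):
    """points_*: (N, 3) arrays of corresponding triangulated feature positions."""
    p, q = np.asarray(points_prev, float), np.asarray(points_curr, float)
    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(p_c.T @ q_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(axis=0) - r @ p.mean(axis=0)
    return r, t

prev = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
rot_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90-degree yaw
curr = prev @ rot_z.T + np.array([0.5, 0, 0])
R, t = estimate_rigid_motion(prev, curr)
print(np.round(R, 3), np.round(t, 3))
```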
Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma
2003-02-28
The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported by the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism, and found that these filters were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects.
Etheridge, Thomas J.; Boulineau, Rémi L.; Herbert, Alex; Watson, Adam T.; Daigaku, Yasukazu; Tucker, Jem; George, Sophie; Jönsson, Peter; Palayret, Matthieu; Lando, David; Laue, Ernest; Osborne, Mark A.; Klenerman, David; Lee, Steven F.; Carr, Antony M.
2014-01-01
Development of single-molecule localization microscopy techniques has allowed nanometre scale localization accuracy inside cells, permitting the resolution of ultra-fine cell structure and the elucidation of crucial molecular mechanisms. Application of these methodologies to understanding processes underlying DNA replication and repair has been limited to defined in vitro biochemical analysis and prokaryotic cells. In order to expand these techniques to eukaryotic systems, we have further developed a photo-activated localization microscopy-based method to directly visualize DNA-associated proteins in unfixed eukaryotic cells. We demonstrate that motion blurring of fluorescence due to protein diffusivity can be used to selectively image the DNA-bound population of proteins. We designed and tested a simple methodology and show that it can be used to detect changes in DNA binding of a replicative helicase subunit, Mcm4, and the replication sliding clamp, PCNA, between different stages of the cell cycle and between distinct genetic backgrounds. PMID:25106872
NASA Astrophysics Data System (ADS)
Sanghavi, Foram; Agaian, Sos
2017-05-01
The goal of this paper is to (a) test a nuclei-based computer-aided cancer detection system using a human visual system based approach on histopathology images and (b) compare the results of the proposed system with Local Binary Pattern and modified Fibonacci-p pattern systems. The system performance is evaluated using different parameters such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human visual based system, compared to 87.42% and 94.70% observed for Local Binary Patterns and the modified Fibonacci-p patterns, respectively.
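The evaluation metrics listed (accuracy, sensitivity, specificity, positive and negative predictive value) all derive from the confusion matrix of detections. A minimal sketch of those standard definitions follows; the counts shown are placeholders, not the paper's results.

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary cancer-detection task."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Placeholder counts for illustration only.
print(detection_metrics(tp=90, fp=10, tn=85, fn=15))
```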
Vollrath-Smith, Fiori R.; Shin, Rick
2011-01-01
Rationale: Noncontingent administration of amphetamine into the ventral striatum or of systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives: We examined whether noncontingent administration of baclofen into the MR or DR increases rats' investigatory behavior rewarded by a flash of light. Results: Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) was not accompanied by increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist SCH 50911, confirming the involvement of local GABAB receptors. Seeking of the visual stimulus also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist SCH 23390 (0.025 mg/kg), suggesting that enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions: Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus-approach responses. PMID:21904820