ERIC Educational Resources Information Center
Meyer, Philip A.
Four experiments are reported that investigated the developmental and mental-retardation aspects of an initial stage of visual information processing termed iconic memory. This stage was described as involving the processing of visual stimuli between sensation and recognition. In three of the four experiments, the paradigm of visual masking…
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
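To make the MEG-fMRI fusion approach mentioned above concrete, here is a minimal sketch of representational similarity analysis (RSA)-based fusion: correlate each time point's MEG representational dissimilarity matrix (RDM) with an fMRI region-of-interest RDM. This is only an illustration of the general technique, not the authors' pipeline; the arrays meg_rdms and fmri_rdm are hypothetical placeholders for real data.

    # Minimal sketch of MEG-fMRI fusion via representational similarity analysis (RSA).
    # Assumes hypothetical inputs: meg_rdms with shape (n_times, n_cond, n_cond) holding
    # time-resolved MEG RDMs, and fmri_rdm with shape (n_cond, n_cond) for one fMRI
    # region of interest (e.g. early visual cortex).
    import numpy as np
    from scipy.stats import spearmanr

    def lower_triangle(rdm):
        # Vectorize the lower triangle (excluding the diagonal) of a square RDM.
        idx = np.tril_indices(rdm.shape[0], k=-1)
        return rdm[idx]

    def fusion_timecourse(meg_rdms, fmri_rdm):
        # Spearman correlation between each MEG time point's RDM and the fMRI ROI RDM.
        roi_vec = lower_triangle(fmri_rdm)
        rhos = []
        for rdm in meg_rdms:
            rho, _ = spearmanr(lower_triangle(rdm), roi_vec)
            rhos.append(rho)
        return np.array(rhos)

    # Example with random data standing in for real RDMs:
    rng = np.random.default_rng(0)
    meg_rdms = rng.random((100, 12, 12))
    fmri_rdm = rng.random((12, 12))
    print(fusion_timecourse(meg_rdms, fmri_rdm).shape)  # (100,)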
The theoretical cognitive process of visualization for science education.
Mnguni, Lindelani E
2014-01-01
The use of visual models such as pictures, diagrams, and animations in science education is increasing because of the complex nature of the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties with various concepts, especially those that exist at a microscopic level, such as DNA, the gene, and meiosis, as well as those that operate over relatively large time scales, such as evolution. However, the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization, asking "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching, and studying of visual literacy in science education?" Based on various theories of cognitive processes during learning in science and general education, the author argues that the theoretical process of visualization consists of three stages: Internalization of Visual Models, Conceptualization of Visual Models, and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and its stages in science education is discussed.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content addressable; i.e., the contents of memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
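The following toy simulation illustrates the core Guided Search idea described above: a parallel stage assigns each display item an activation (target-feature match plus noise), and the serial stage inspects items in order of decreasing activation. This is not the Cave and Wolfe (1990) simulation itself; all parameter values and the function name are illustrative assumptions.

    # Toy illustration of guidance from a parallel stage to a serial stage in visual search.
    import numpy as np

    def guided_search_steps(n_items, guidance=1.0, noise_sd=0.5, rng=None):
        rng = rng or np.random.default_rng()
        target = rng.integers(n_items)                # index of the target item
        activation = rng.normal(0.0, noise_sd, n_items)
        activation[target] += guidance                # parallel stage favours the target
        order = np.argsort(-activation)               # serial stage: highest activation first
        return int(np.where(order == target)[0][0]) + 1  # items inspected before the target

    # With strong guidance the number of inspected items barely grows with set size,
    # mimicking efficient conjunction search; with guidance=0 it grows roughly linearly.
    for n in (4, 8, 16):
        steps = [guided_search_steps(n, guidance=2.0) for _ in range(2000)]
        print(n, round(float(np.mean(steps)), 2))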
Numerosity processing in early visual cortex.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
2017-08-15
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Saliency affects feedforward more than feedback processing in early visual cortex.
Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony
2013-07-01
Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.
Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing
2018-03-28
The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect/discriminate relative locations of features) to Chinese character-processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide for the first time empirical evidence that the finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
ERP Evidence of Visualization at Early Stages of Visual Processing
ERIC Educational Resources Information Center
Page, Jonathan W.; Duhamel, Paul; Crognale, Michael A.
2011-01-01
Recent neuroimaging research suggests that early visual processing circuits are activated similarly during visualization and perception but have not demonstrated that the cortical activity is similar in character. We found functional equivalency in cortical activity by recording evoked potentials while color and luminance patterns were viewed and…
Early-Stage Visual Processing and Cortical Amplification Deficits in Schizophrenia
Butler, Pamela D.; Zemon, Vance; Schechter, Isaac; Saperstein, Alice M.; Hoptman, Matthew J.; Lim, Kelvin O.; Revheim, Nadine; Silipo, Gail; Javitt, Daniel C.
2005-01-01
Background: Patients with schizophrenia show deficits in early-stage visual processing, potentially reflecting dysfunction of the magnocellular visual pathway. The magnocellular system operates normally in a nonlinear amplification mode mediated by glutamatergic (N-methyl-d-aspartate) receptors. Investigating magnocellular dysfunction in schizophrenia therefore permits evaluation of underlying etiologic hypotheses. Objectives: To evaluate magnocellular dysfunction in schizophrenia, relative to known neurochemical and neuroanatomical substrates, and to examine relationships between electrophysiological and behavioral measures of visual pathway dysfunction and relationships with higher cognitive deficits. Design, Setting, and Participants: Between-group study at an inpatient state psychiatric hospital and outpatient county psychiatric facilities. Thirty-three patients met DSM-IV criteria for schizophrenia or schizoaffective disorder, and 21 nonpsychiatric volunteers of similar ages composed the control group. Main Outcome Measures: (1) Magnocellular and parvocellular evoked potentials, analyzed using nonlinear (Michaelis-Menten) and linear contrast gain approaches; (2) behavioral contrast sensitivity measures; (3) white matter integrity; (4) visual and nonvisual neuropsychological measures; and (5) clinical symptom and community functioning measures. Results: Patients generated evoked potentials that were significantly reduced in response to magnocellular-biased, but not parvocellular-biased, stimuli (P=.001). Michaelis-Menten analyses demonstrated reduced contrast gain of the magnocellular system (P=.001). Patients showed decreased contrast sensitivity to magnocellular-biased stimuli (P<.001). Evoked potential deficits were significantly related to decreased white matter integrity in the optic radiations (P<.03). Evoked potential deficits predicted impaired contrast sensitivity (P=.002), which was in turn related to deficits in complex visual processing (P≤.04). Both evoked potential (P≤.04) and contrast sensitivity (P=.01) measures significantly predicted community functioning. Conclusions: These findings confirm the existence of early-stage visual processing dysfunction in schizophrenia and provide the first evidence that such deficits are due to decreased nonlinear signal amplification, consistent with glutamatergic theories. Neuroimaging studies support the hypothesis of dysfunction within low-level visual pathways involving thalamocortical radiations. Deficits in early-stage visual processing significantly predict higher cognitive deficits. PMID:15867102
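The Michaelis-Menten contrast-gain analysis invoked above fits a saturating contrast-response function, R(c) = Rmax * c / (c + c50), to evoked-potential amplitude as a function of stimulus contrast. The sketch below shows this kind of fit; the sample data and parameter values are illustrative assumptions, not the study's measurements.

    # Fit a Michaelis-Menten contrast-response function to hypothetical VEP amplitudes.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(c, r_max, c50):
        return r_max * c / (c + c50)

    contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])     # stimulus contrast
    amplitude = np.array([0.8, 1.5, 2.6, 3.6, 4.3, 4.7])          # hypothetical VEP amplitude (uV)

    (r_max, c50), _ = curve_fit(michaelis_menten, contrast, amplitude, p0=[5.0, 0.1])
    print(f"Rmax = {r_max:.2f} uV, c50 = {c50:.3f}")
    # A reduced Rmax (or elevated c50) in patients would indicate reduced nonlinear
    # contrast gain of the magnocellular-biased response, as reported in the study.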
Role of temporal processing stages by inferior temporal neurons in facial recognition.
Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji
2011-01-01
In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources at an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short-term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident at later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole-report paradigm that allows investigation of visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.
Electrophysiological evidence for biased competition in V1 for fear expressions.
West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay
2011-11-01
When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.
The involvement of central attention in visual search is determined by task demands.
Han, Suk Won
2017-04-01
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.
Concept of Operations Visualization in Support of Ares I Production
NASA Technical Reports Server (NTRS)
Chilton, James H.; Smith, David Alan
2008-01-01
Boeing was selected in 2007 to manufacture the Ares I Upper Stage and Instrument Unit according to NASA's design, which would require the use of the latest manufacturing and integration processes to meet NASA budget and schedule targets. Past production experience has established that the majority of the life cycle cost is determined during the initial design process. Concept of Operations (CONOPs) visualizations/simulations help to reduce life cycle cost during the early design stage. Production and operation visualizations can reduce tooling, factory capacity, safety, and build process risks while spreading program support across government, academic, media and public constituencies. The NASA/Boeing production visualization (DELMIA; Digital Enterprise Lean Manufacturing Interactive Application) promotes timely, concurrent and collaborative producibility analysis (Boeing) while supporting Upper Stage Design Cycles (NASA). The DELMIA CONOPs visualization reduced overall Upper Stage production flow time at the manufacturing facility by over 100 man-days to 312.5 man-days and helped to identify technical access issues. The NASA/Boeing Interactive Concept of Operations (ICON) provides interactive access to Ares using real mission parameters, allows users to configure the mission, which encourages ownership and identifies areas for improvement, allows mission operations or spacecraft detail to be added as needed, and provides an effective, low-cost advocacy, outreach and education tool.
Neural correlates of audiovisual integration in music reading.
Nichols, Emily S; Grahn, Jessica A
2016-10-01
Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as later stages (the P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
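Complementing the fusion sketch given earlier in this listing, the comparison between a deep neural network (DNN) and brain data described above can be illustrated as follows: build a representational dissimilarity matrix (RDM) from a DNN layer's activations to the same stimuli and correlate it with a brain RDM. The activation and brain arrays below are random placeholders, not data from the study.

    # Sketch of DNN-to-brain comparison via RDM correlation (illustrative assumptions only).
    import numpy as np
    from scipy.stats import spearmanr

    def rdm_from_activations(acts):
        # acts: (n_stimuli, n_units); dissimilarity = 1 - Pearson correlation between rows.
        return 1.0 - np.corrcoef(acts)

    def rdm_similarity(rdm_a, rdm_b):
        idx = np.tril_indices(rdm_a.shape[0], k=-1)
        rho, _ = spearmanr(rdm_a[idx], rdm_b[idx])
        return rho

    rng = np.random.default_rng(4)
    layer_acts = rng.normal(size=(12, 4096))    # hypothetical DNN layer activations
    brain_rdm = rng.random((12, 12))            # hypothetical brain RDM for the same 12 stimuli
    print(rdm_similarity(rdm_from_activations(layer_acts), brain_rdm))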
Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model
Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.
2013-01-01
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
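The cue-delay analysis described above can be sketched as fitting an exponential decay of partial-report performance toward a stable asymptote, with the asymptote reflecting the VSTM contribution and the time constant demarcating sensory (iconic) memory. The data points below are hypothetical, not the study's measurements.

    # Fit partial-report performance versus cue delay with an exponential decay.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, p0, p_inf, tau):
        # p0: performance at zero delay; p_inf: asymptote (VSTM); tau: decay time constant.
        return p_inf + (p0 - p_inf) * np.exp(-t / tau)

    cue_delay = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0, 3.0])            # s after stimulus offset
    performance = np.array([0.85, 0.74, 0.62, 0.52, 0.47, 0.45, 0.44])   # proportion correct

    (p0, p_inf, tau), _ = curve_fit(decay, cue_delay, performance, p0=[0.9, 0.4, 0.5])
    print(f"initial = {p0:.2f}, asymptote (VSTM) = {p_inf:.2f}, tau = {tau:.2f} s")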
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
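The steady-state visual evoked potential (SSVEP) frequency-tagging measure used above can be sketched as reading out the spectral amplitude at each stimulus flicker frequency. The flicker frequencies, sampling rate, and synthetic signal below are assumptions chosen only to make the example runnable.

    # Estimate SSVEP amplitude at tagged frequencies from a single epoch's spectrum.
    import numpy as np

    def ssvep_amplitude(signal, srate, freq):
        # Amplitude spectrum of the epoch; read out the bin nearest the tagged frequency.
        spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / srate)
        return spectrum[np.argmin(np.abs(freqs - freq))]

    srate = 500.0                        # Hz
    t = np.arange(0, 4.0, 1.0 / srate)   # 4-s tracking epoch
    target_f, distractor_f = 12.0, 15.0
    # Synthetic signal: stronger response at the target frequency plus noise.
    eeg = (2.0 * np.sin(2 * np.pi * target_f * t)
           + 1.0 * np.sin(2 * np.pi * distractor_f * t)
           + np.random.default_rng(1).normal(0, 1, t.size))

    print(ssvep_amplitude(eeg, srate, target_f), ssvep_amplitude(eeg, srate, distractor_f))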
Parallel processing of general and specific threat during early stages of perception
2016-01-01
Differential processing of threat can consummate as early as 100 ms post-stimulus. Moreover, early perception not only differentiates threat from non-threat stimuli but also distinguishes among discrete threat subtypes (e.g. fear, disgust and anger). Combining spatial-frequency-filtered images of fear, disgust and neutral scenes with high-density event-related potentials and intracranial source estimation, we investigated the neural underpinnings of general and specific threat processing in early stages of perception. Conveyed in low spatial frequencies, fear and disgust images evoked convergent visual responses with similarly enhanced N1 potentials and dorsal visual (middle temporal gyrus) cortical activity (relative to neutral cues; peaking at 156 ms). Nevertheless, conveyed in high spatial frequencies, fear and disgust elicited divergent visual responses, with fear enhancing and disgust suppressing P1 potentials and ventral visual (occipital fusiform) cortical activity (peaking at 121 ms). Therefore, general and specific threat processing operates in parallel in early perception, with the ventral visual pathway engaged in specific processing of discrete threats and the dorsal visual pathway in general threat processing. Furthermore, selectively tuned to distinctive spatial-frequency channels and visual pathways, these parallel processes underpin dimensional and categorical threat characterization, promoting efficient threat response. These findings thus lend support to hybrid models of emotion. PMID:26412811
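The spatial-frequency filtering used to produce low- versus high-SF stimulus versions, as described above, can be sketched as a radial cutoff in the 2-D Fourier domain. The cutoff values and the random stand-in "image" below are illustrative assumptions rather than the study's filter settings.

    # Low-pass / high-pass spatial-frequency filtering of an image via the 2-D FFT.
    import numpy as np

    def sf_filter(image, cutoff, keep='low'):
        f = np.fft.fftshift(np.fft.fft2(image))
        h, w = image.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.sqrt(xx**2 + yy**2)            # spatial frequency in cycles per image
        mask = radius <= cutoff if keep == 'low' else radius > cutoff
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

    image = np.random.default_rng(3).random((256, 256))     # stand-in for a scene image
    low_sf = sf_filter(image, cutoff=8, keep='low')          # coarse, magnocellular-biased content
    high_sf = sf_filter(image, cutoff=24, keep='high')       # fine, parvocellular-biased content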
Similarities in neural activations of face and Chinese character discrimination.
Liu, Jiangang; Tian, Jie; Li, Jun; Gong, Qiyong; Lee, Kang
2009-02-18
This study compared Chinese participants' visual discrimination of Chinese faces with that of Chinese characters, which are highly similar to faces on a variety of dimensions. Both Chinese faces and characters activated the bilateral middle fusiform with high levels of correlations. These findings suggest that although the expertise systems for faces and written symbols are known to be anatomically differentiated at the later stages of processing to serve face processing or written-symbol-specific processing purposes, they may share similar neural structures in the ventral occipitotemporal cortex at the stages of visual processing.
Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G
2015-04-01
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. Copyright © 2015 the authors 0270-6474/15/355351-09$15.00/0.
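As an illustration of how a lateralized component such as the N2pc mentioned above is typically quantified, the sketch below computes the contralateral-minus-ipsilateral difference at a posterior channel pair (e.g. PO7/PO8) averaged over roughly 170-250 ms. The array shapes, channel indices, and random data are assumptions, not the study's recordings or pipeline.

    # Contralateral-minus-ipsilateral quantification of a lateralized ERP component (N2pc-like).
    import numpy as np

    def n2pc_amplitude(epochs, target_side, times, left_ch, right_ch, window=(0.170, 0.250)):
        # epochs: (n_trials, n_channels, n_times); target_side: array of 'L'/'R' per trial.
        mask = (times >= window[0]) & (times <= window[1])
        is_left = (target_side == 'L')[:, None]
        contra = np.where(is_left, epochs[:, right_ch, :], epochs[:, left_ch, :])
        ipsi = np.where(is_left, epochs[:, left_ch, :], epochs[:, right_ch, :])
        return float((contra[:, mask] - ipsi[:, mask]).mean())

    # Example with random data standing in for real recordings:
    rng = np.random.default_rng(2)
    times = np.linspace(-0.1, 0.5, 301)
    epochs = rng.normal(0, 1, (200, 64, times.size))
    sides = rng.choice(np.array(['L', 'R']), 200)
    print(n2pc_amplitude(epochs, sides, times, left_ch=25, right_ch=62))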
Binding and unbinding the auditory and visual streams in the McGurk effect.
Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc
2012-08-01
Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus as the spotlight metaphor describes. Spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignorance/suppression are opposite effects of attention and appear to be mutually exclusive. Yet, no unified view of the factors has been provided despite their necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. Steady-state visual evoked potentials showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Emotional and movement-related body postures modulate visual processing
Borhani, Khatereh; Làdavas, Elisabetta; Maier, Martin E.; Avenanti, Alessio
2015-01-01
Human body postures convey useful information for understanding others’ emotions and intentions. To investigate at which stage of visual processing emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials elicited by laterally presented images of bodies with static postures and implied-motion body images with neutral, fearful or happy expressions. At the early stage of visual structural encoding (N190), we found a difference in the sensitivity of the two hemispheres to observed body postures. Specifically, the right hemisphere showed a N190 modulation both for the motion content (i.e. all the observed postures implying body movements elicited greater N190 amplitudes compared with static postures) and for the emotional content (i.e. fearful postures elicited the largest N190 amplitude), while the left hemisphere showed a modulation only for the motion content. In contrast, at a later stage of perceptual representation, reflecting selective attention to salient stimuli, an increased early posterior negativity was observed for fearful stimuli in both hemispheres, suggesting an enhanced processing of motivationally relevant stimuli. The observed modulations, both at the early stage of structural encoding and at the later processing stage, suggest the existence of a specialized perceptual mechanism tuned to emotion- and action-related information conveyed by human body postures. PMID:25556213
Sung, Kyongje; Gordon, Barry
2018-01-01
Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a DIFFUSE, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
Portella, Claudio; Machado, Sergio; Arias-Carrión, Oscar; Sack, Alexander T.; Silva, Julio Guilherme; Orsini, Marco; Leite, Marco Antonio Araujo; Silva, Adriana Cardoso; Nardi, Antonio E.; Cagy, Mauricio; Piedade, Roberto; Ribeiro, Pedro
2012-01-01
The brain is capable of elaborating and executing different stages of information processing. However, exactly how these stages are processed in the brain remains largely unknown. This study aimed to analyze the possible correlation between early and late stages of information processing by assessing the latency to, and amplitude of, early and late event-related potential (ERP) components, including P200, N200, premotor potential (PMP) and P300, in healthy participants in the context of a visual oddball paradigm. We found a moderate positive correlation among the latencies of P200 (electrode O2), N200 (electrode O2), PMP (electrode C3), and P300 (electrode PZ) and the reaction time (RT). In addition, a moderate negative correlation was found between the amplitude of P200 and the latencies of N200 (electrode O2), PMP (electrode C3), and P300 (electrode PZ). Therefore, we propose that if the secondary processing of visual input (P200 latency) occurs faster, the following will also happen sooner: the discrimination and classification of this input (N200 latency), motor response processing (PMP latency), reorganization of attention and working memory updating (P300 latency), and the RT. N200, PMP, and P300 latencies also occur earlier when the activation level of occipital areas involved in the secondary processing of visual input is higher (P200 amplitude). PMID:23355929
Visual processing of music notation: a study of event-related potentials.
Lee, Horng-Yih; Wang, Yu-Sin
2011-04-01
In reading music, the acquisition of pitch information depends mostly on the spatial position of notes, and hence on spatial processing, whereas the acquisition of temporal information depends mostly on the visual features of notes and object recognition. This study used both electrophysiological and behavioral methods to compare the processing of pitch and duration in reading single musical notes. It was observed that in the early stage of note reading, identification of pitch elicited greater N1 and N2 amplitudes than identification of duration at the parietal lobe electrodes. In the later stages of note reading, identifying pitch elicited a greater negative slow wave at parietal electrodes than did identifying note duration. The sustained contribution of parietal processes for pitch suggests that the dorsal pathway is essential for pitch processing. However, the duration task did not elicit greater amplitude of any early ERP components than the pitch task at temporal electrodes. Accordingly, a double dissociation between dorsal-stream involvement in spatial pitch processing and ventral-stream involvement in processing note durations was not observed.
Cognitive load effects on early visual perceptual processing.
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
2018-05-01
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.
2015-01-01
Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644
Recurrent V1-V2 interaction in early visual boundary processing.
Neumann, H; Sepp, W
1999-11-01
A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
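To make the proposed gain-control loop concrete, the following minimal Python sketch (an illustration, not the authors' published equations) runs a one-dimensional toy version of the scheme: normalized feedforward contrast responses are matched against a local contour template at the second stage, and the match is fed back as a multiplicative gain that enhances measurements consistent with the template. Array sizes, the template shape, and all constants are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n = 64
signal = np.zeros(n)
signal[20:44] = 1.0                              # a fragmented "contour"
signal[rng.integers(20, 44, 6)] = 0.0            # gaps along the contour
x = signal + 0.3 * rng.random(n)                 # noisy contrast measurements

def normalize(a, eps=1e-6):
    return a / (eps + a.sum())

v1 = normalize(np.clip(x, 0.0, None))            # stage 1: normalized contrast responses
template = np.ones(7) / 7                        # stage 2: local "contour shape" pattern (assumed)

for _ in range(10):                              # recurrent V1-V2 iterations
    v2 = np.convolve(v1, template, mode="same")  # match input arrangement against the template
    gain = 1.0 + 2.0 * v2 / (v2.max() + 1e-6)    # top-down, context-dependent gain
    v1 = normalize(np.clip(x, 0.0, None) * gain) # enhance measurements consistent with feedback

print("contour vs. background response ratio:", v1[20:44].mean() / v1[:20].mean())

In this toy version the ratio grows over iterations, mirroring the idea that local measurements consistent with a broader contour context are selectively enhanced.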
Adaptation of velocity encoding in synaptically coupled neurons in the fly visual system.
Kalb, Julia; Egelhaaf, Martin; Kurtz, Rafael
2008-09-10
Although many adaptation-induced effects on neuronal response properties have been described, it is often unknown at what processing stages in the nervous system they are generated. We focused on fly visual motion-sensitive neurons to identify changes in response characteristics during prolonged visual motion stimulation. By simultaneous recordings of synaptically coupled neurons, we were able to directly compare adaptation-induced effects at two consecutive processing stages in the fly visual motion pathway. This allowed us to narrow the potential sites of adaptation effects within the visual system and to relate them to the properties of signal transfer between neurons. Motion adaptation was accompanied by a response reduction, which was somewhat stronger in postsynaptic than in presynaptic cells. We found that the linear representation of motion velocity degrades during adaptation to a white-noise velocity-modulated stimulus. This effect is caused by an increasingly nonlinear velocity representation rather than by an increase of noise and is similarly strong in presynaptic and postsynaptic neurons. In accordance with this similarity, the dynamics and the reliability of interneuronal signal transfer remained nearly constant. Thus, adaptation is mainly based on processes located in the presynaptic neuron or in more peripheral processing stages. In contrast, changes of transfer properties at the analyzed synapse or in postsynaptic spike generation contribute little to changes in velocity coding during motion adaptation.
Aging, selective attention, and feature integration.
Plude, D J; Doussard-Roosevelt, J A
1989-03-01
This study used feature-integration theory as a means of determining the point in processing at which selective attention deficits originate. The theory posits an initial stage of processing in which features are registered in parallel and then a serial process in which features are conjoined to form complex stimuli. Performance of young and older adults on feature versus conjunction search is compared. Analyses of reaction times and error rates suggest that elderly adults, like young adults, can capitalize on the early, parallel stage of visual information processing, and that age decrements in visual search arise as a result of the later, serial stage of processing. Analyses of a third, unconfounded, conjunction search condition reveal qualitatively similar modes of conjunction search in young and older adults. The contribution of age-related data limitations is found to be secondary to the contribution of age decrements in selective attention.
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual-stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual-stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150-ms latencies), without incurring any costs for selection of target-matching distractors. These results unambiguously support a feedforward account of top-down modulation.
Two-stage perceptual learning to break visual crowding.
Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang
2016-01-01
When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit on object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even to a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a strategy by which the brain deals with the notoriously difficult problem of identifying peripheral objects in clutter: it first learns to solve the "easy and general" part of the problem (i.e., improving processing resolution and segmenting the target from the flankers) and then tackles the "difficult and specific" part (i.e., refining the representation of the target).
Comparing the visual spans for faces and letters
He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.
2015-01-01
The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858
Executive working memory load induces inattentional blindness.
Fougnie, Daryl; Marois, René
2007-02-01
When attention is engaged in a task, unexpected events in the visual scene may go undetected, a phenomenon known as inattentional blindness (IB). At what stage of information processing must attention be engaged for IB to occur? Although manipulations that tax visuospatial attention can induce IB, the evidence is more equivocal for tasks that engage attention at late, central stages of information processing. Here, we tested whether IB can be specifically induced by central executive processes. An unexpected visual stimulus was presented during the retention interval of a working memory task that involved either simply maintaining verbal material or rearranging the material into alphabetical order. The unexpected stimulus was more likely to be missed during manipulation than during simple maintenance of the verbal information. Thus, the engagement of executive processes impairs the ability to detect unexpected, task-irrelevant stimuli, suggesting that IB can result from central, amodal stages of processing.
Signal detection evidence for limited capacity in visual search
Fencsik, David E.; Flusberg, Stephen J.; Horowitz, Todd S.; Wolfe, Jeremy M.
2014-01-01
The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models. PMID:21901574
Visual processing deficits in 22q11.2 Deletion Syndrome.
Biria, Marjan; Tomescu, Miralena I; Custo, Anna; Cantonas, Lucia M; Song, Kun-Wei; Schneider, Maude; Murray, Micah M; Eliez, Stephan; Michel, Christoph M; Rihs, Tonia A
2018-01-01
Carriers of the rare 22q11.2 microdeletion present with a high percentage of positive and negative symptoms and a high genetic risk for schizophrenia. Visual processing impairments have been characterized in schizophrenia, but less so in 22q11.2 Deletion Syndrome (DS). Here, we focus on visual processing using high-density EEG and source imaging in 22q11.2DS participants (N = 25) and healthy controls (N = 26) with an illusory contour discrimination task. Significant differences between groups emerged at early and late stages of visual processing. In 22q11.2DS, we first observed reduced amplitudes over occipital channels and reduced source activations within dorsal and ventral visual stream areas during the P1 (100-125 ms) and within ventral visual cortex during the N1 (150-170 ms) visual evoked components. During a later window implicated in visual completion (240-285 ms), we observed an increase in global amplitudes in 22q11.2DS. The increased surface amplitudes for illusory contours at this window were inversely correlated with positive subscales of prodromal symptoms in 22q11.2DS. The reduced activity of ventral and dorsal visual areas during early stages points to an impairment in visual processing seen both in schizophrenia and 22q11.2DS. During intervals related to perceptual closure, the inverse correlation of high amplitudes with positive symptoms suggests that participants with 22q11.2DS who show an increased brain response to illusory contours during the relevant window for contour processing have less psychotic symptoms and might thus be at a reduced prodromal risk for schizophrenia.
Krajcovicova, Lenka; Barton, Marek; Elfmarkova-Nemcova, Nela; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2017-12-01
Visual processing difficulties are often present in Alzheimer's disease (AD), even in its pre-dementia phase (i.e. in mild cognitive impairment, MCI). The default mode network (DMN) modulates brain connectivity depending on the specific cognitive demand, including visual processes. The aim of the present study was to analyze specific changes in connectivity of the posterior DMN node (i.e. the posterior cingulate cortex and precuneus, PCC/P) associated with visual processing in 17 MCI patients and 15 AD patients as compared to 18 healthy controls (HC) using functional magnetic resonance imaging. We used psychophysiological interaction (PPI) analysis to detect specific alterations in PCC connectivity associated with visual processing while controlling for brain atrophy. In the HC group, we observed physiological changes in PCC connectivity in ventral visual stream areas and with PCC/P during the visual task, reflecting the successful involvement of these regions in visual processing. In the MCI group, the PCC connectivity changes were disturbed and remained significant only with the anterior precuneus. In the between-group comparison, we observed significant PPI effects in the right superior temporal gyrus in both MCI and AD as compared to HC. This change in connectivity may reflect an ineffective "compensatory" mechanism present in the early pre-dementia stages of AD or an abnormal modulation of brain connectivity due to the disease pathology. With disease progression, these changes become more evident but less efficient in terms of compensation. This approach can separate MCI from HC with 77% sensitivity and 89% specificity.
Koivisto, Mika; Kahila, Ella
2017-04-01
Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to directly separate the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation acts at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interactions between attention, context and learning in primary visual cortex.
Gilbert, C; Ito, M; Kapadia, M; Westheimer, G
2000-01-01
Attention in early visual processing engages the higher order, context dependent properties of neurons. Even at the earliest stages of visual cortical processing neurons play a role in intermediate level vision - contour integration and surface segmentation. The contextual influences mediating this process may be derived from long range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.
ERIC Educational Resources Information Center
Martens, Ulla; Hubner, Ronald
2013-01-01
While hemispheric differences in global/local processing have been reported by various studies, it is still under dispute at which processing stage they occur. Primarily, it was assumed that these asymmetries originate from an early perceptual stage. Instead, the content-level binding theory (Hubner & Volberg, 2005) suggests that the hemispheres…
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
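For readers unfamiliar with TVA, the parameters above live inside a simple exponential race: each display item x is processed at rate v_x = C * w_x / sum(w_z), and the probability of encoding it within a brief exposure is 1 - exp(-v_x * (t - t0)). The sketch below is a hedged illustration of how an alerting-induced increase in overall processing rate raises encoding probabilities; it is not the fitting procedure used in the study, and all numbers are invented.

import numpy as np

def encoding_prob(C, weights, exposure_ms, t0_ms=10.0):
    """Probability that each item is encoded before display offset.

    C           -- total processing capacity, items per second (assumed value below)
    weights     -- relative attentional weights w_x (assumed)
    exposure_ms -- exposure duration in ms
    t0_ms       -- perceptual threshold below which nothing is encoded (assumed)
    """
    w = np.asarray(weights, dtype=float)
    v = C * w / w.sum()                         # TVA rate equation v_x = C * w_x / sum(w_z)
    t = max(exposure_ms - t0_ms, 0.0) / 1000.0  # usable processing time in seconds
    return 1.0 - np.exp(-v * t)

weights = [1.0, 1.0, 0.8, 0.8, 0.6, 0.6]        # six display items (illustrative)
print("no cue :", encoding_prob(C=30.0, weights=weights, exposure_ms=80).round(2))
print("alerted:", encoding_prob(C=38.0, weights=weights, exposure_ms=80).round(2))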
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows: tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of said visual stimuli. (TJH)
Visual White Matter Integrity in Schizophrenia
Butler, Pamela D.; Hoptman, Matthew J.; Nierenberg, Jay; Foxe, John J.; Javitt, Daniel C.; Lim, Kelvin O.
2007-01-01
Objective Patients with schizophrenia have visual-processing deficits. This study examines visual white matter integrity as a potential mechanism for these deficits. Method Diffusion tensor imaging was used to examine white matter integrity at four levels of the visual system in 17 patients with schizophrenia and 21 comparison subjects. The levels examined were the optic radiations, the striate cortex, the inferior parietal lobule, and the fusiform gyrus. Results Schizophrenia patients showed a significant decrease in fractional anisotropy in the optic radiations but not in any other region. Conclusions This finding indicates that white matter integrity is more impaired at initial input, rather than at higher levels of the visual system, and supports the hypothesis that visual-processing deficits occur at the early stages of processing. PMID:17074957
Preserved figure-ground segregation and symmetry perception in visual neglect.
Driver, J; Baylis, G C; Rafal, R D
1992-11-05
A central controversy in current research on visual attention is whether figures are segregated from their background preattentively, or whether attention is first directed to unstructured regions of the image. Here we present neurological evidence for the former view from studies of a brain-injured patient with visual neglect. His attentional impairment arises after normal segmentation of the image into figures and background has taken place. Our results indicate that information which is neglected and unavailable to higher levels of visual processing can nevertheless be processed by earlier stages in the visual system concerned with segmentation.
Tracking the first two seconds: three stages of visual information processing?
Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa
2013-12-01
We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.
Zhu, Chuanlin; He, Weiqi; Qi, Zhengyang; Wang, Lili; Song, Dongqing; Zhan, Lei; Yi, Shengnan; Luo, Yuejia; Luo, Wenbo
2015-01-01
The present study recorded event-related potentials using rapid serial visual presentation paradigm to explore the time course of emotionally charged pictures. Participants completed a dual-target task as quickly and accurately as possible, in which they were asked to judge the gender of the person depicted (task 1) and the valence (positive, neutral, or negative) of the given picture (task 2). The results showed that the amplitudes of the P2 component were larger for emotional pictures than they were for neutral pictures, and this finding represents brain processes that distinguish emotional stimuli from non-emotional stimuli. Furthermore, positive, neutral, and negative pictures elicited late positive potentials with different amplitudes, implying that the differences between emotions are recognized. Additionally, the time course for emotional picture processing was consistent with the latter two stages of a three-stage model derived from studies on emotional facial expression processing and emotional adjective processing. The results of the present study indicate that in the three-stage model of emotion processing, the middle and late stages are more universal and stable, and thus occur at similar time points when using different stimuli (faces, words, or scenes). PMID:26217276
Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai
2011-01-01
Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340
Sensory Contributions to Impaired Emotion Processing in Schizophrenia
Butler, Pamela D.; Abeles, Ilana Y.; Weiskopf, Nicole G.; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E.; Zemon, Vance; Loughead, James; Gur, Ruben C.; Javitt, Daniel C.
2009-01-01
Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective. PMID:19793797
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; de Vries, Jade G; Cohen, Michael X; Lamme, Victor A F
2015-12-01
Evidence is accumulating that the classic two-stage model of visual STM (VSTM), comprising iconic memory (IM) and visual working memory (WM), is incomplete. A third memory stage, termed fragile VSTM (FM), seems to exist in between IM and WM [Vandenbroucke, A. R. E., Sligte, I. G., & Lamme, V. A. F. Manipulations of attention dissociate fragile visual STM from visual working memory. Neuropsychologia, 49, 1559-1568, 2011; Sligte, I. G., Scholte, H. S., & Lamme, V. A. F. Are there multiple visual STM stores? PLoS One, 3, e1699, 2008]. Although FM can be distinguished from IM using behavioral and fMRI methods, the question remains whether FM is a weak expression of WM or a separate form of memory with its own neural signature. Here, we tested whether FM and WM in humans are supported by dissociable time-frequency features of EEG recordings. Participants performed a partial-report change detection task, from which individual differences in FM and WM capacity were estimated. These individual FM and WM capacities were correlated with time-frequency characteristics of the EEG signal before and during encoding and maintenance of the memory display. FM capacity showed negative alpha correlations over peri-occipital electrodes, whereas WM capacity was positively related, suggesting increased visual processing (lower alpha) to be related to FM capacity. Furthermore, FM capacity correlated with an increase in theta power over central electrodes during preparation and processing of the memory display, whereas WM did not. In addition to a difference in visual processing characteristics, a positive relation between gamma power and FM capacity was observed during both preparation and maintenance periods of the task. On the other hand, we observed that theta-gamma coupling was negatively correlated with FM capacity, whereas it was slightly positively correlated with WM. These data show clear differences in the neural substrates of FM versus WM and suggest that FM depends more on visual processing mechanisms compared with WM. This study thus provides novel evidence for a dissociation between different stages in VSTM.
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom up and top down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
Digital holographic interferometry applied to the investigation of ignition process.
Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B
2017-06-12
We use the digital holographic interferometry (DHI) technique to display the early ignition process for a butane-air mixture flame. Because such an event occurs in a short time (few milliseconds), a fast CCD camera is used to study the event. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient and underexposed image. Therefore, the CCD's direct observation of the combustion process is limited (down to 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high power laser in order to supply enough light to increase the speed capture, thus improving the visualization of the phenomenon in the initial moments. An experimental optical setup based on DHI is used to obtain a large sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion which slightly emits visible light, and a second stage induced by variations in temperature when the flame is emerging. While the last stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly evidences this process. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Brightness masking is modulated by disparity structure.
Pelekanos, Vassilis; Ban, Hiroshi; Welchman, Andrew E
2015-05-01
The luminance contrast at the borders of a surface strongly influences the surface's apparent brightness, as demonstrated by a number of classic visual illusions. Such phenomena are compatible with a propagation mechanism believed to spread contrast information from the borders to the interior. This process is disrupted by masking, where the perceived brightness of a target is reduced by the brief presentation of a mask (Paradiso & Nakayama, 1991), but the exact visual stage at which this happens remains unclear. In the present study, we examined whether brightness masking occurs at a monocular or a binocular level of the visual hierarchy. We used backward masking, whereby a briefly presented target stimulus is disrupted by a mask coming soon afterwards, to show that brightness masking is affected by binocular stages of visual processing. We manipulated the 3-D configurations (slant direction) of the target and mask and measured the differential disruption that masking causes on brightness estimation. We found that the masking effect was weaker when the stimuli had different slants. We suggest that brightness masking is partly mediated by mid-level neuronal mechanisms, at a stage where binocular disparity edge structure has been extracted. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visual dysfunction in Parkinson’s disease
Weil, Rimona S.; Schrag, Anette E.; Warren, Jason D.; Crutch, Sebastian J.; Lees, Andrew J.; Morris, Huw R.
2016-01-01
Patients with Parkinson’s disease have a number of specific visual disturbances. These include changes in colour vision and contrast sensitivity and difficulties with complex visual tasks such as mental rotation and emotion recognition. We review changes in visual function at each stage of visual processing from retinal deficits, including contrast sensitivity and colour vision deficits to higher cortical processing impairments such as object and motion processing and neglect. We consider changes in visual function in patients with common Parkinson’s disease-associated genetic mutations including GBA and LRRK2. We discuss the association between visual deficits and clinical features of Parkinson’s disease such as rapid eye movement sleep behavioural disorder and the postural instability and gait disorder phenotype. We review the link between abnormal visual function and visual hallucinations, considering current models for mechanisms of visual hallucinations. Finally, we discuss the role of visuo-perceptual testing as a biomarker of disease and predictor of dementia in Parkinson’s disease. PMID:27412389
Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei
2017-01-01
Character order information is encoded at the initial stage of Chinese word processing; however, its time course remains underspecified. In this study, we assessed the exact time course of the character decomposition and transposition processes for two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords, using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, the character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated; however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.
Multiplexing in the primate motion pathway.
Huk, Alexander C
2012-06-01
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.
The effects of bilateral presentations on lateralized lexical decision.
Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran
2007-06-01
We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor ("lexical redundancy effect") is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by "perceptual" distractors consisting of sequences of the letter "x" (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.
Motion processing with two eyes in three dimensions.
Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2011-02-11
The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.
Evaluation of hides and leather using ultrasonic technology
USDA-ARS's Scientific Manuscript database
Hides are visually inspected and ranked for quality and sale price. Because visual inspection is not reliable for detecting defects when hair is present, hides cannot be effectively sorted at the earliest stage of processing. Furthermore, this subjective assessment is non-uniform among operators, ...
Early access to abstract representations in developing readers: evidence from masked priming.
Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel
2013-07-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing - as measured by masked priming - in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g.- [ktz b-ktA b] - note that the three initial letters are connected in prime and target) than from those that do not (- [ktxb-ktA b]). Results showed that the magnitude of the priming effect relative to an unrelated condition (e.g. -) was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to the abstract letter representations not only in adult readers but also in developing readers. © 2013 Blackwell Publishing Ltd.
Query-Driven Visualization and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Bethel, E. Wes; Prabhat, Mr.
2012-11-01
This report focuses on an approach to high performance visualization and analysis, termed query-driven visualization and analysis (QDV). QDV aims to reduce the amount of data that needs to be processed by the visualization, analysis, and rendering pipelines. The goal of the data reduction process is to separate out data that is "scientifically interesting" and to focus visualization, analysis, and rendering on that interesting subset. The premise is that for any given visualization or analysis task, the data subset of interest is much smaller than the larger, complete data set. This strategy---extracting smaller data subsets of interest and focusing the visualization processing on these subsets---is complementary to the approach of increasing the capacity of the visualization, analysis, and rendering pipelines through parallelism. This report discusses the fundamental concepts in QDV, their relationship to different stages in the visualization and analysis pipelines, and presents QDV's application to problems in diverse areas, ranging from forensic cybersecurity to high energy physics.
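The core QDV idea can be sketched in a few lines: evaluate the query over the full data set first, then hand only the qualifying subset to the analysis and rendering stages. The example below is a generic illustration with synthetic data; the field names, the threshold, and the use of NumPy are assumptions, not the system described in the report.

import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
data = {
    "energy": rng.exponential(1.0, n),   # synthetic scalar field
    "x": rng.uniform(0.0, 1.0, n),
    "y": rng.uniform(0.0, 1.0, n),
}

# Query: keep only the "scientifically interesting" records, e.g. high-energy events.
mask = data["energy"] > 5.0
subset = {name: values[mask] for name, values in data.items()}

print(f"selected {mask.sum()} of {n} records ({100 * mask.mean():.3f}%) "
      "for downstream analysis and rendering")
# Only `subset`, not `data`, would then be passed to the visualization pipeline,
# e.g. a scatter plot of subset["x"] against subset["y"].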
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2009-01-01
We present an MEG study of heteronym recognition, aiming to distinguish between two theories of lexical access: the "early access" theory, which entails that lexical access occurs at early (pre 200 ms) stages of processing, and the "late access" theory, which interprets this early activity as orthographic word-form identification rather than…
Enhancing Manufacturing Process Education via Computer Simulation and Visualization
ERIC Educational Resources Information Center
Manohar, Priyadarshan A.; Acharya, Sushil; Wu, Peter
2014-01-01
Industrially significant metal manufacturing processes such as melting, casting, rolling, forging, machining, and forming are multi-stage, complex processes that are labor, time, and capital intensive. Academic research develops mathematical modeling of these processes that provide a theoretical framework for understanding the process variables…
Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.
Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R
2001-11-01
The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple-layer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages having a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
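The following toy Python sketch (not the authors' network, which used associatively modified synapses and much richer dynamics) illustrates the basic ingredient discussed above: a purely feedforward chain of leaky integrate-and-fire units with a 10 ms synaptic time constant, in which the first-spike latency accumulates from stage to stage. All parameter values are chosen only so that activity propagates.

# Toy feedforward chain of leaky integrate-and-fire units; not the authors'
# model, only an illustration of latency accumulating across stages.
import numpy as np

dt = 0.1                      # ms, integration step
T = 120.0                     # ms, simulated time
tau_m, tau_syn = 20.0, 10.0   # membrane and synaptic time constants (ms)
v_th = 1.0                    # spike threshold (arbitrary units)
n_layers = 4
w = 4.5                       # feedforward weight, chosen only so activity propagates

v = np.zeros(n_layers)            # membrane potentials
syn = np.zeros(n_layers)          # synaptic currents into each layer
first_spike = [None] * n_layers
spikes_prev = np.zeros(n_layers)

for step in range(int(T / dt)):
    t = step * dt
    drive = 0.5 if t >= 10.0 else 0.0        # external input to layer 0 from t = 10 ms
    spikes = np.zeros(n_layers)
    for i in range(n_layers):
        # synaptic current: exponential decay plus external drive (layer 0)
        # or spikes arriving from the previous layer
        kick = drive * dt if i == 0 else w * spikes_prev[i - 1]
        syn[i] += -dt * syn[i] / tau_syn + kick
        # leaky integration of the membrane potential
        v[i] += dt * (syn[i] - v[i]) / tau_m
        if v[i] >= v_th:
            spikes[i] = 1.0
            v[i] = 0.0
            if first_spike[i] is None:
                first_spike[i] = t
    spikes_prev = spikes

print("first-spike latency per layer (ms):", first_spike)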
ERIC Educational Resources Information Center
Essley, Roger
2005-01-01
Essley was a "different learner," and now he works in schools showing teachers how visual/verbal tools can help all students, including their "different learners," succeed. One valuable tool is storyboarding, a process by which students build a story through visual stages--drafts, conferences, revisions--before writing even begins. Essley shares…
Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.
2018-01-01
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Schwaibold, M; Schöller, B; Penzel, T; Bolz, A
2001-05-01
We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It comprises a number of interacting components, based on artificial intelligence methods, that imitate the stepwise approach of the human expert. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-taking process and the flexibility to expand the system to cover new patterns and criteria.
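A minimal Python sketch of the two-step scheme described above, pattern detection followed by rule-based stage interpretation with context. It is not the ARTISANA implementation; the pattern names, thresholds, and rules are invented for the example.

# Illustrative two-step pipeline: pattern evidence per epoch, then
# rule-based stage interpretation that also sees the previous stage.
from typing import Dict, List

def detect_patterns(epoch_features: Dict[str, float]) -> Dict[str, float]:
    """First step: turn per-second signal features into pattern evidence in [0, 1]
    (in ARTISANA this role is played by artificial neural networks)."""
    return {
        "delta_activity": min(1.0, epoch_features.get("delta_power", 0.0) / 100.0),
        "spindles": min(1.0, epoch_features.get("sigma_power", 0.0) / 30.0),
        "rapid_eye_movements": min(1.0, epoch_features.get("eog_activity", 0.0) / 50.0),
    }

def interpret_stage(evidence: Dict[str, float], previous_stage: str) -> str:
    """Second step: rule interpretation of the pattern evidence, using the
    previous epoch as minimal 'context'. Rules and thresholds are invented."""
    if evidence["delta_activity"] > 0.5:
        return "Stage 3/4"
    if evidence["spindles"] > 0.5:
        return "Stage 2"
    if evidence["rapid_eye_movements"] > 0.5 and evidence["delta_activity"] < 0.3:
        return "REM"
    return "Stage 1" if previous_stage != "Wake" else "Wake"

# toy run over three consecutive epochs
epochs: List[Dict[str, float]] = [
    {"delta_power": 20.0, "sigma_power": 25.0, "eog_activity": 5.0},
    {"delta_power": 80.0, "sigma_power": 10.0, "eog_activity": 5.0},
    {"delta_power": 10.0, "sigma_power": 5.0, "eog_activity": 40.0},
]
stage = "Wake"
for features in epochs:
    stage = interpret_stage(detect_patterns(features), stage)
    print(stage)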
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
Forder, Lewis; He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing.
The Audio-Visual Marketing Handbook for Independent Schools.
ERIC Educational Resources Information Center
Griffith, Tom
This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…
The impact of hunger on food cue processing: an event-related brain potential study.
Stockburger, Jessica; Schmälzle, Ralf; Flaisch, Tobias; Bublatzky, Florian; Schupp, Harald T
2009-10-01
The present study used event-related brain potentials to examine deprivation effects on visual attention to food stimuli at the level of distinct processing stages. Thirty-two healthy volunteers (16 females) were tested twice 1 week apart, either after 24 h of food deprivation or after normal food intake. Participants viewed a continuous stream of food and flower images while dense sensor ERPs were recorded. As revealed by distinct ERP modulations in relatively earlier and later time windows, deprivation affected the processing of food and flower pictures. Between 300 and 360 ms, food pictures were associated with enlarged occipito-temporal negativity and centro-parietal positivity in deprived compared to satiated state. Of main interest, in a later time window (approximately 450-600 ms), deprivation increased amplitudes of the late positive potential elicited by food pictures. Conversely, flower processing varied by motivational state with decreased positive potentials in the deprived state. Minimum-Norm analyses provided further evidence that deprivation enhanced visual attention to food cues in later processing stages. From the perspective of motivated attention, hunger may induce a heightened state of attention for food stimuli in a processing stage related to stimulus recognition and focused attention.
Sensory system plasticity in a visually specialized, nocturnal spider.
Stafstrom, Jay A; Michalik, Peter; Hebets, Eileen A
2017-04-21
The interplay between an animal's environmental niche and its behavior can influence the evolutionary form and function of its sensory systems. While intraspecific variation in sensory systems has been documented across distant taxa, fewer studies have investigated how changes in behavior might relate to plasticity in sensory systems across developmental time. To investigate the relationships among behavior, peripheral sensory structures, and central processing regions in the brain, we take advantage of a dramatic within-species shift of behavior in a nocturnal, net-casting spider (Deinopis spinosa), where males cease visually-mediated foraging upon maturation. We compared eye diameters and brain region volumes across sex and life stage, the latter through micro-computed X-ray tomography. We show that mature males possess altered peripheral visual morphology when compared to their juvenile counterparts, as well as juvenile and mature females. Matching peripheral sensory structure modifications, we uncovered differences in relative investment in both lower-order and higher-order processing regions in the brain responsible for visual processing. Our study provides evidence for sensory system plasticity when individuals dramatically change behavior across life stages, uncovering new avenues of inquiry focusing on altered reliance of specific sensory information when entering a new behavioral niche.
Theoretical approaches to lightness and perception.
Gilchrist, Alan
2015-01-01
Theories of lightness, like theories of perception in general, can be categorized as high-level, low-level, and mid-level. However, I will argue that in practice there are only two categories: one-stage mid-level theories, and two-stage low-high theories. Low-level theories usually include a high-level component and high-level theories include a low-level component, the distinction being mainly one of emphasis. Two-stage theories are the modern incarnation of the persistent sensation/perception dichotomy according to which an early experience of raw sensations, faithful to the proximal stimulus, is followed by a process of cognitive interpretation, typically based on past experience. Like phlogiston or the ether, raw sensations seem like they must exist, but there is no clear evidence for them. Proximal stimulus matches are postperceptual, not read off an early sensory stage. Visual angle matches are achieved by a cognitive process of flattening the visual world. Likewise, brightness (luminance) matches depend on a cognitive process of flattening the illumination. Brightness is not the input to lightness; brightness is slower than lightness. Evidence for an early (< 200 ms) mosaic stage is shaky. As for cognitive influences on perception, the many claims tend to fall apart upon close inspection of the evidence. Much of the evidence for the current revival of the 'new look' is probably better explained by (1) a natural desire of (some) subjects to please the experimenter, and (2) the ease of intuiting an experimental hypothesis. High-level theories of lightness are overkill. The visual system does not need to know the amount of illumination, merely which surfaces share the same illumination. This leaves mid-level theories derived from the gestalt school. Here the debate seems to revolve around layer models and framework models. Layer models fit our visual experience of a pattern of illumination projected onto a pattern of reflectance, while framework models provide a better account of illusions and failures of constancy. Evidence for and against these approaches is reviewed.
Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.
Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit
2015-09-09
Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.
A Process Model for the Comprehension of Organic Chemistry Notation
ERIC Educational Resources Information Center
Havanki, Katherine L.
2012-01-01
This dissertation examines the cognitive processes individuals use when reading organic chemistry equations and factors that affect these processes, namely, visual complexity of chemical equations and participant characteristics (expertise, spatial ability, and working memory capacity). A six stage process model for the comprehension of organic…
Mecklinger, Axel; Kriukova, Olga; Mühlmann, Heiner; Grunwald, Thomas
2014-01-01
Visual object identification is modulated by perceptual experience. In a cross-cultural ERP study we investigated whether cultural expertise determines how buildings that vary in their ranking between high and low according to the Western architectural decorum are perceived. Two groups of German and Chinese participants performed an object classification task in which high- and low-ranking Western buildings had to be discriminated from everyday life objects. ERP results indicate that an early stage of visual object identification (i.e., object model selection) is facilitated for high-ranking buildings for the German participants only. At a later stage of object identification, in which object knowledge is complemented by information from semantic and episodic long-term memory, no ERP evidence for cultural differences was obtained. These results suggest that the identification of architectural ranking is modulated by culturally specific expertise with Western-style architecture already at an early processing stage.
Visual Masking in Schizophrenia: Overview and Theoretical Implications
Green, Michael F.; Lee, Junghee; Wynn, Jonathan K.; Mathis, Kristopher I.
2011-01-01
Visual masking provides several key advantages for exploring the earliest stages of visual processing in schizophrenia: it allows for control over timing at the millisecond level, there are several well-supported theories of the underlying neurobiology of visual masking, and it is amenable to examination by electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI). In this paper, we provide an overview of the visual masking impairment in schizophrenia, including the relevant theoretical mechanisms for masking impairment. We will discuss its relationship to clinical symptoms, antipsychotic medications, diagnostic specificity, and presence in at-risk populations. As part of this overview, we will cover the neural correlates of visual masking based on recent findings from EEG and fMRI. Finally, we will suggest a possible mechanism that could explain the patterns of masking findings and other visual processing findings in schizophrenia. PMID:21606322
Creative user-centered visualization design for energy analysts and modelers.
Goodwin, Sarah; Dykes, Jason; Jones, Sara; Dillingham, Iain; Dove, Graham; Duffy, Alison; Kachkaev, Alexander; Slingsby, Aidan; Wood, Jo
2013-12-01
We enhance a user-centered design process with techniques that deliberately promote creativity to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation whereby data sets are largely unknown and requirements open - enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors including data sculpting. It suggests: that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel and effective solutions; that being explicit about creativity may contribute to designers developing creative solutions; that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.
Design, Control and in Situ Visualization of Gas Nitriding Processes
Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy
2010-01-01
The article presents a complex system for the design, in situ visualization and control of a commonly used surface treatment process: gas nitriding. The computer-aided design concept combines analytical mathematical models with artificial intelligence methods. As a result, it is possible to perform poly-optimization and poly-parametric simulations of the course of the process, combined with visualization of changes in the process parameter values as a function of time, and to predict the properties of the nitrided layers. For in situ visualization of nitrided layer growth, computer procedures were developed that correlate the direct and differential voltage-time curves from the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to combine, during the process, the registered voltage-time curves with the models of the process. PMID:22315536
Joo, Sung Jun; White, Alex L; Strodtman, Douglas J; Yeatman, Jason D
2018-06-01
Reading is a complex process that involves low-level visual processing, phonological processing, and higher-level semantic processing. Given that skilled reading requires integrating information among these different systems, it is likely that reading difficulty-known as dyslexia-can emerge from impairments at any stage of the reading circuitry. To understand contributing factors to reading difficulties within individuals, it is necessary to diagnose the function of each component of the reading circuitry. Here, we investigated whether adults with dyslexia who have impairments in visual processing respond to a visual manipulation specifically targeting their impairment. We collected psychophysical measures of visual crowding and tested how each individual's reading performance was affected by increased text-spacing, a manipulation designed to alleviate severe crowding. Critically, we identified a sub-group of individuals with dyslexia showing elevated crowding and found that these individuals read faster when text was rendered with increased letter-, word- and line-spacing. Our findings point to a subtype of dyslexia involving elevated crowding and demonstrate that individuals benefit from interventions personalized to their specific impairments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Emotional words facilitate lexical but not early visual processing.
Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M
2015-12-12
Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e. in early visual cortex. However, emotional words have led to mixed results. In the current study ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity in emotional word processing. Neutral and negative words were flickered at 12.14 Hz whilst participants performed a Lexical Decision Task. Emotional word content did not modulate the 12.14 Hz SSVEP amplitude, nor did word lexicality. However, emotional words affected the ERP. Negative compared to neutral words, as well as words compared to pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that written emotional words are facilitated in processing only subsequent to visual analysis.
Three Stages and Two Systems of Visual Processing
1989-01-01
…as squaring do not, in and of themselves, imply second-order processing. For example, the Adelson and Bergen (1985) detector of directional motion… rectification; half-wave rectification is a second-order processing scheme. [Figure 8: stimuli for analyzing second-order processing. (a) An x,y,t representation of…]
Visual form-processing deficits: a global clinical classification.
Unzueta-Arce, J; García-García, R; Ladera-Fernández, V; Perea-Bartolomé, M V; Mora-Simón, S; Cacho-Gutiérrez, J
2014-10-01
Patients who have difficulties recognising visual form stimuli are usually labelled as having visual agnosia. However, recent studies let us identify different clinical manifestations corresponding to discrete diagnostic entities which reflect a variety of deficits along the continuum of cortical visual processing. We reviewed different clinical cases published in medical literature as well as proposals for classifying deficits in order to provide a global perspective of the subject. Here, we present the main findings on the neuroanatomical basis of visual form processing and discuss the criteria for evaluating processing which may be abnormal. We also include an inclusive diagram of visual form processing deficits which represents the different clinical cases described in the literature. Lastly, we propose a boosted decision tree to serve as a guide in the process of diagnosing such cases. Although the medical community largely agrees on which cortical areas and neuronal circuits are involved in visual processing, future studies making use of new functional neuroimaging techniques will provide more in-depth information. A well-structured and exhaustive assessment of the different stages of visual processing, designed with a global view of the deficit in mind, will give a better idea of the prognosis and serve as a basis for planning personalised psychostimulation and rehabilitation strategies. Copyright © 2011 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
ERIC Educational Resources Information Center
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…
A comprehensive approach to visual resource management for highway agencies
William G. E. Blair; Larry Isaacson; Grant R. Jones
1979-01-01
To help ensure that visual effects are considered at all stages of highway agency decision-making, the Federal Highway Administration contracted with Jones & Jones to develop and conduct a five-day training course to guide highway professionals in developing VRM processes for their own agencies. The training course emphasizes overall principles...
Prediction and constraint in audiovisual speech perception
Peelle, Jonathan E.; Sommers, Mitchell S.
2015-01-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep study may be affected by internal and external influences. The recorded sleep data contains complex and stochastic factors, which increase the difficulties for computerized sleep stage determination techniques to be applied in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is then performed on the basis of conditional probability. The results showed close agreement with the clinician's visual inspection. The developed system can meet customized requirements in hospitals and institutions.
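The conditional-probability step described above can be sketched as follows, assuming Gaussian parameter densities stored in the expert knowledge database; all stage names, parameters, and numbers are illustrative rather than values from the study.

# Minimal sketch of conditional-probability stage determination:
# per-stage parameter densities (here assumed Gaussian) learned from
# expert-scored data, then the stage with the highest posterior is chosen.
import numpy as np
from scipy.stats import norm

# "Expert knowledge database": mean/std of two parameters per stage,
# plus prior stage probabilities (all numbers invented for the example).
knowledge_db = {
    "Wake":    {"prior": 0.20, "delta_power": (10.0, 5.0),  "emg_level": (40.0, 10.0)},
    "Stage 2": {"prior": 0.45, "delta_power": (35.0, 10.0), "emg_level": (20.0, 8.0)},
    "SWS":     {"prior": 0.20, "delta_power": (70.0, 15.0), "emg_level": (15.0, 6.0)},
    "REM":     {"prior": 0.15, "delta_power": (15.0, 6.0),  "emg_level": (5.0, 3.0)},
}

def classify_epoch(params):
    """Return the stage maximizing P(stage) * prod_i p(parameter_i | stage)."""
    posteriors = {}
    for stage, model in knowledge_db.items():
        log_p = np.log(model["prior"])
        for name, value in params.items():
            mean, std = model[name]
            log_p += norm.logpdf(value, loc=mean, scale=std)
        posteriors[stage] = log_p
    return max(posteriors, key=posteriors.get)

print(classify_epoch({"delta_power": 65.0, "emg_level": 12.0}))   # likely SWS
print(classify_epoch({"delta_power": 14.0, "emg_level": 4.0}))    # likely REM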
Benefits of interhemispheric integration on the Japanese Kana script-matching tasks.
Yoshizaki, K; Tsuji, Y
2000-02-01
We tested Banich's hypothesis that the benefits of bihemispheric processing were enhanced as task complexity increased, when some procedural shortcomings in the previous studies were overcome by using Japanese Kana script-matching tasks. In Exp. 1, the 20 right-handed subjects were given the Physical-Identity task (Katakana-Katakana scripts matching) and the Name-Identity task (Katakana-Hiragana scripts matching). On both tasks, a pair of Kana scripts was tachistoscopically presented in the left, right, and bilateral visual fields. Distractor stimuli were also presented with target Kana scripts on both tasks to equate the processing load between the hemispheres. Analysis showed that, while a bilateral visual-field advantage was found on the name-identity task, a unilateral visual-field advantage was found on the physical-identity task, suggesting that, as the computational complexity of the encoding stage was enhanced, the benefits of bilateral hemispheric processing increased. In Exp. 2, the 16 right-handed subjects were given the same physical-identity task as in Exp. 1, except Hiragana scripts were used as distractors instead of digits to enhance task difficulty. Analysis showed no differences in performance between the unilateral and bilateral visual fields. Taking into account these results of physical-identity tasks for both Exps. 1 and 2, enhancing task demand in the stage of ignoring distractors made the unilateral visual-field advantage obtained in Exp. 1 disappear in Exp. 2. These results supported Banich's hypothesis.
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
2014-01-01
Background: Neurofibromatosis type 1 (NF1) affects several areas of cognitive function including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods: Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results: We found significant differences between the groups for late chromatic VEPs and a specific enhancement in the amplitude of the parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations in visual performance were found in both groups. Conclusions: Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228
Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G
2000-06-01
Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i. e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Direction selectivity of blowfly motion-sensitive neurons is computed in a two-stage process.
Borst, A; Egelhaaf, M
1990-01-01
Direction selectivity of motion-sensitive neurons is generally thought to result from the nonlinear interaction between the signals derived from adjacent image points. Modeling of motion-sensitive networks, however, reveals that such elements may still respond to motion in a rather poor directionally selective way. Direction selectivity can be significantly enhanced if the nonlinear interaction is followed by another processing stage in which the signals of elements with opposite preferred directions are subtracted from each other. Our electrophysiological experiments in the fly visual system suggest that here direction selectivity is acquired in such a two-stage process. PMID:2251278
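A toy Python sketch of such a two-stage computation: a correlation-type (Reichardt-like) interaction between two neighbouring image points, followed by subtraction of the mirror-symmetric subunit with the opposite preferred direction. Stimulus and filter parameters are illustrative; note how the individual subunits respond to both motion directions whereas the opponent output changes sign with direction.

# Two-stage toy motion detector: delay-and-correlate subunits, then
# subtraction of the subunit with the opposite preferred direction.
import numpy as np

def moving_grating(n_x=64, n_t=200, velocity=1.0):
    # sinusoidal grating drifting at the given velocity (pixels per frame)
    x = np.arange(n_x)
    t = np.arange(n_t)[:, None]
    return np.sin(2 * np.pi * (x - velocity * t) / 16.0)

def low_pass(signal, tau=5.0):
    # simple first-order temporal low-pass filter (acts as the delay stage)
    out = np.zeros_like(signal)
    for k in range(1, len(signal)):
        out[k] = out[k - 1] + (signal[k] - out[k - 1]) / tau
    return out

def detector_response(stimulus):
    a, b = stimulus[:, 30], stimulus[:, 31]   # two neighbouring image points
    right = low_pass(a) * b                   # subunit preferring rightward motion
    left = low_pass(b) * a                    # mirror subunit, leftward preference
    return right.mean(), left.mean(), (right - left).mean()

for v in (+1.0, -1.0):
    r, l, opponent = detector_response(moving_grating(velocity=v))
    print(f"velocity {v:+.0f}: right subunit {r:+.3f}, left subunit {l:+.3f}, "
          f"opponent output {opponent:+.3f}")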
Peters, Judith C; Vlamings, Petra; Kemner, Chantal
2013-05-01
Face perception in adults depends on skilled processing of interattribute distances ('configural' processing), which is disrupted for faces presented in inverted orientation (face inversion effect or FIE). Children are not proficient in configural processing, and this might relate to an underlying immaturity to use facial information in low spatial frequency (SF) ranges, which capture the coarse information needed for configural processing. We hypothesized that during adolescence a shift from use of high to low SF information takes place. Therefore, we studied the influence of SF content on neural face processing in groups of children (9-10 years), adolescents (14-15 years) and young adults (21-29 years) by measuring event-related potentials (ERPs) to upright and inverted faces which varied in SF content. Results revealed that children show a neural FIE in early processing stages (i.e. P1; generated in early visual areas), suggesting a superficial, global facial analysis. In contrast, ERPs of adults revealed an FIE at later processing stages (i.e. N170; generated in face-selective, higher visual areas). Interestingly, adolescents showed FIEs in both processing stages, suggesting a hybrid developmental stage. Furthermore, adolescents and adults showed FIEs for stimuli containing low SF information, whereas such effects were driven by both low and high SF information in children. These results indicate that face processing has a protracted maturational course into adolescence, and is dependent on changes in SF processing. During adolescence, sensitivity to configural cues is developed, which aids the fast and holistic processing that is so special for faces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
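A generic sketch of the RSA comparison used in studies of this kind (not the authors' pipeline): build a representational dissimilarity matrix (RDM) for each data source and correlate their condensed upper triangles; the random "MEG patterns" and "model activations" below are placeholders for real data.

# Generic RSA comparison between two data sources via their RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions = 24                                            # e.g. 24 scene exemplars

meg_patterns = rng.normal(size=(n_conditions, 300))          # sensor/time features per condition
model_activations = rng.normal(size=(n_conditions, 4096))    # e.g. one network layer per condition

# RDM: 1 - Pearson correlation between condition patterns (condensed form)
meg_rdm = pdist(meg_patterns, metric="correlation")
model_rdm = pdist(model_activations, metric="correlation")

rho, p = spearmanr(meg_rdm, model_rdm)
print(f"RDM similarity (Spearman rho): {rho:.3f}, p = {p:.3f}")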
Visualization and quantification of three-dimensional distribution of yeast in bread dough.
Maeda, Tatsuro; DO, Gab-Soo; Sugiyama, Junichi; Araki, Tetsuya; Tsuta, Mizuki; Shiraga, Seizaburo; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki
2009-07-01
A three-dimensional (3-D) bio-imaging technique was developed for visualizing and quantifying the 3-D distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process of the samples, applying cell-surface engineering to the surfaces of the yeast cells. The fluorescent yeast was recognized as bright spots at the wavelength of 520 nm. Frozen dough samples were sliced at intervals of 1 µm by a micro-slicer image processing system (MSIPS) equipped with a fluorescence microscope for acquiring cross-sectional images of the samples. A set of successive two-dimensional images was reconstructed to analyze the 3-D distribution of the yeast. The average shortest distance between centroids of enhanced green fluorescent protein (EGFP) yeasts was 10.7 µm at the pick-up stage, 9.7 µm at the clean-up stage, 9.0 µm at the final stage, and 10.2 µm at the over-mixing stage. The results indicated that the distribution of the yeast cells was the most uniform in the dough of white bread at the final stage, while the heterogeneous distribution at the over-mixing stage was possibly due to the destruction of the gluten network structure within the samples.
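The distance statistic reported above (the average shortest distance between yeast centroids) can be computed as a mean nearest-neighbour distance in 3-D; in the following sketch the random centroids stand in for coordinates extracted from the sliced fluorescence images.

# Mean nearest-neighbour distance among 3-D centroids.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
centroids_um = rng.uniform(0.0, 100.0, size=(500, 3))   # placeholder coordinates in micrometres

tree = cKDTree(centroids_um)
# k=2 because the closest point to each centroid is itself (distance 0)
distances, _ = tree.query(centroids_um, k=2)
mean_nn_distance = distances[:, 1].mean()
print(f"average nearest-neighbour distance: {mean_nn_distance:.1f} micrometres")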
Synaptic physiology of the flow of information in the cat's visual cortex in vivo
Hirsch, Judith A; Martinez, Luis M; Alonso, José-Manuel; Desai, Komal; Pillai, Cinthi; Pierre, Carhine
2002-01-01
Each stage of the striate cortical circuit extracts novel information about the visual environment. We asked if this analytic process reflected laminar variations in synaptic physiology by making whole-cell recording with dye-filled electrodes from the cat's visual cortex and thalamus; the stimuli were flashed spots. Thalamic afferents terminate in layer 4, which contains two types of cell, simple and complex, distinguished by the spatial structure of the receptive field. Previously, we had found that the postsynaptic and spike responses of simple cells reliably followed the time course of flash-evoked thalamic activity. Here we report that complex cells in layer 4 (or cells intermediate between simple and complex) similarly reprised thalamic activity (response/trial, 99 ± 1.9 %; response duration 159 ± 57 ms; latency 25 ± 4 ms; average ± standard deviation; n = 7). Thus, all cells in layer 4 share a common synaptic physiology that allows secure integration of thalamic input. By contrast, at the second cortical stage (layer 2+3), where layer 4 directs its output, postsynaptic responses did not track simple patterns of antecedent activity. Typical responses to the static stimulus were intermittent and brief (response/trial, 31 ± 40 %; response duration 72 ± 60 ms, latency 39 ± 7 ms; n = 11). Only richer stimuli like those including motion evoked reliable responses. All told, the second level of cortical processing differs markedly from the first. At that later stage, ascending information seems strongly gated by connections between cortical neurons. Inputs must be combined in newly specified patterns to influence intracortical stages of processing. PMID:11927691
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
Neural time course of visually enhanced echo suppression.
Bishop, Christopher W; London, Sam; Miller, Lee M
2012-10-01
Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.
Spatiotemporal Dynamics of Bilingual Word Processing
Leonard, Matthew K.; Brown, Timothy T.; Travis, Katherine E.; Gharapetian, Lusineh; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2009-01-01
Studies with monolingual adults have identified successive stages occurring in different brain regions for processing single written words. We combined magnetoencephalography and magnetic resonance imaging to compare these stages between the first (L1) and second (L2) languages in bilingual adults. L1 words in a size judgment task evoked a typical left-lateralized sequence of activity first in ventral occipitotemporal cortex (VOT: previously associated with visual word-form encoding), and then ventral frontotemporal regions (associated with lexico-semantic processing). Compared to L1, words in L2 activated right VOT more strongly from ~135 ms; this activation was attenuated when words became highly familiar with repetition. At ~400ms, L2 responses were generally later than L1, more bilateral, and included the same lateral occipitotemporal areas as were activated by pictures. We propose that acquiring a language involves the recruitment of right hemisphere and posterior visual areas that are not necessary once fluency is achieved. PMID:20004256
Early visual processing is enhanced in the midluteal phase of the menstrual cycle.
Lusk, Bethany R; Carr, Andrea R; Ranson, Valerie A; Bryant, Richard A; Felmingham, Kim L
2015-12-01
Event-related potential (ERP) studies have revealed an early attentional bias in processing unpleasant emotional images in women. Recent neuroimaging data suggest there are significant differences in cortical emotional processing according to menstrual phase. This study examined the impact of menstrual phase on visual emotional processing in women compared to men. ERPs were recorded from 28 early follicular women, 29 midluteal women, and 27 men while they completed a passive viewing task of neutral and low- and high-arousing pleasant and unpleasant images. There was a significant effect of menstrual phase on early visual processing, as midluteal women displayed significantly greater P1 amplitude at occipital regions to all visual images compared to men. Both midluteal and early follicular women displayed larger N1 amplitudes than men to the visual images (although this only reached significance for the midluteal group). No sex or menstrual phase differences were apparent in the later N2, P3, or LPP. A condition effect demonstrated greater P3 and LPP amplitude to highly arousing unpleasant images relative to all other stimulus conditions. These results indicate that women have greater early automatic visual processing compared to men, and suggest that this effect is particularly strong in women in the midluteal phase at the earliest stage of visual attention processing. Our findings highlight the importance of considering menstrual phase when examining sex differences in the cortical processing of visual stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.
Matin, L; Li, W
2001-10-01
An individual line or a combination of lines viewed in darkness has a large influence on the elevation to which an observer sets a target so that it is perceived to lie at eye level (VPEL). These influences are systematically related to the orientation of pitched-from-vertical lines on pitched plane(s) and to the lengths of the lines, as well as to the orientations of lines of 'equivalent pitch' that lie on frontoparallel planes. A three-stage model processes the visual influence: the first stage processes the orientations of the lines in parallel, utilizing two classes of orientation-sensitive neural units in each hemisphere, with the two classes sensitive to opposing ranges of orientations; the signal delivered by each class is of opposite sign in the two hemispheres. The second stage generates the total visual influence from the parallel combination of inputs delivered by the four groups of the first stage, and a third stage combines the total visual influence from the second stage with signals from the body-referenced mechanism that contains information about the position and orientation of the eyes, head, and body. The circuit equation describing the combined influence of n separate inputs from stage 1 on the output of the stage 2 integrating neuron is derived for n stimulus lines that possess any combination of orientations and lengths; each of the n lines is assumed to stimulate one of the groups of orientation-sensitive units in visual cortex (stage 1) whose signals converge onto a dendrite of the integrating neuron (stage 2), and to produce changes in postsynaptic membrane conductance (g(i)) and potential (V(i)) there. The net current from the n dendrites results in a voltage change (V(A)) at the initial segment of the axon of the integrating neuron. Nerve impulse frequency proportional to this voltage change signals the total visual influence on the perceived elevation of the visual field. The circuit equation corresponding to the total visual influence for n equal-length inducing lines is V(A) = sum V(i) / [n + (g(A)/g(S))], where the potential change due to line i, V(i), is proportional to line orientation, g(A) is the conductance at the axon's summing point, and g(S) = g(i) for each i in the equal-length case; the net conductance change due to a line is proportional to the line's length. The circuit equation is interpreted as a basis for quantitative predictions from the model that can be compared to psychophysical measurements of the elevation of VPEL. The interpretation provides the predicted relation for the visual influence on VPEL, V, for n inducing lines each of length l: V = a + [k(1) sum theta(i)] / [n + (k(2)/l)], where theta(i) is the orientation of line i, a is the effect of the body-referenced mechanism, and k(1) and k(2) are constants. The model's output is fitted to the results of five sets of experiments in which the elevation of VPEL measured with a small target in the median plane is systematically influenced by distantly located 1-line or 2-line inducing stimuli varying in orientation and length and viewed in otherwise total darkness with gaze restricted to the median plane; each line is located at 25 degrees eccentricity to either the left or the right of the median plane. The model predicts the negatively accelerated growth of VPEL with line length for each orientation and the change of the slope constant of the linear combination rule among lines from 1.00 (linear summation; short lines) to 0.61 (near-averaging; long lines).
Fits to the data are obtained over a range of orientations from -30 degrees to +30 degrees of pitch for 1-line visual fields from lengths of 3 degrees to 64 degrees, for parallel 2-line visual fields over the same range of lengths and orientations, for short and long 2-line combinations in which each of the two members may have any orientation (parallel or nonparallel pairs), and for the well-illuminated and fully structured pitchroom. In addition, similar experiments with 2-line stimuli of equivalent pitch in the frontoparallel plane were also fitted to the model. The model accounts for more than 98% of the variance of the results in each case.
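To make the quoted circuit equation concrete, the following is a minimal numerical sketch (not code from the original study) of the equal-length form V = a + [k(1) sum theta(i)] / [n + (k(2)/l)]; the parameter values and the example stimulus below are hypothetical and would in practice be fitted to psychophysical VPEL data.

```python
import numpy as np

def vpel_visual_influence(orientations_deg, length_deg, a=0.0, k1=1.0, k2=10.0):
    """Equal-length form of the stage-2 circuit equation for the visual influence on VPEL.

    Computes V = a + k1 * sum(theta_i) / (n + k2 / l), where theta_i are the line
    orientations (degrees of pitch), l is the common line length (degrees), and
    a, k1, k2 are free parameters (hypothetical values used here).
    """
    theta = np.asarray(orientations_deg, dtype=float)
    n = theta.size
    return a + k1 * theta.sum() / (n + k2 / float(length_deg))

# Hypothetical example: two parallel 32-degree-long lines pitched +20 degrees.
print(vpel_visual_influence([20.0, 20.0], length_deg=32.0))
```

In this form the limiting behavior matches the abstract: for short lines the k(2)/l term dominates the denominator, so the contributions of multiple lines effectively sum, whereas for long lines the denominator approaches n and the model near-averages the orientations.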
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word processing through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe during semantic judgments with implicit priming, and during overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120 ms and peaking at ~170 ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800 ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550 ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle-layer sink, but does decrease later activity. Entorhinal activity begins later (~200 ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval. These results identify a sequence of physiological activation, beginning with a sharp activation from lower-level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated for repeated words. Following the bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations and with feedback mnestic information from the medial temporal lobe. PMID:16488158
Boshkovikj, Veselin; Fluke, Christopher J; Crawford, Russell J; Ivanova, Elena P
2014-02-28
There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a 'creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The 'Dynamics' and 'nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.
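The abstract does not reproduce the authors' Maya import script, so the semi-automated stage is illustrated here only with a small, self-contained sketch (not the authors' code) that converts a regular grid of AFM height samples into a Wavefront OBJ surface, which Autodesk Maya can then import; the file name, grid values, and pixel spacing below are hypothetical.

```python
import numpy as np

def afm_grid_to_obj(heights_nm, pixel_size_nm, out_path="afm_surface.obj"):
    """Write a regular AFM height grid (rows x cols, in nm) as an OBJ quad mesh.

    Each grid sample becomes a vertex; adjacent samples are joined into quads.
    The resulting OBJ file can be imported into Autodesk Maya (File > Import).
    """
    h = np.asarray(heights_nm, dtype=float)
    rows, cols = h.shape
    with open(out_path, "w") as f:
        for r in range(rows):
            for c in range(cols):
                f.write(f"v {c * pixel_size_nm} {h[r, c]} {r * pixel_size_nm}\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c + 1  # OBJ vertex indices are 1-based
                f.write(f"f {i} {i + 1} {i + cols + 1} {i + cols}\n")

# Hypothetical 4 x 4 patch of height samples (nm) at 10 nm pixel spacing.
afm_grid_to_obj(np.random.rand(4, 4) * 5.0, pixel_size_nm=10.0)
```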
The effects of visual search efficiency on object-based attention
Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene
2017-01-01
The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192
Prediction and constraint in audiovisual speech perception.
Peelle, Jonathan E; Sommers, Mitchell S
2015-07-01
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.
2013-01-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196
Visualization and Measurement of Multiple Components of the Autophagy Flux.
Evans, Tracey; Button, Robert; Anichtchik, Oleg; Luo, Shouqing
2018-06-24
Autophagy is an intracellular degradation process that mediates the clearance of cytoplasmic components. As well as being an important function for cellular homeostasis, autophagy also promotes the removal of aberrant protein accumulations, such as those seen in conditions like neurodegeneration. The dynamic nature of autophagy requires precise methods to examine the process at multiple stages. The protocols described herein enable the dissection of the complete autophagy process (the "autophagy flux"). These allow for the elucidation of which stages of autophagy may be altered in response to various diseases and treatments.
[Cortical potentials evoked to response to a signal to make a memory-guided saccade].
Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V
2010-01-01
Differences in the parameters of visually guided and memory-guided saccades were demonstrated. The longer latency of memory-guided saccades compared with visually guided saccades may indicate that saccadic programming is slowed when spatial information must be retrieved from memory. Comparison of the parameters and topography of the N1 and P1 components of the potential evoked by the signal to make a memory-guided or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is governed predominantly by a top-down attention mechanism before memory-guided saccades and by a bottom-up mechanism before visually guided saccades. The findings show that the increased latency of memory-guided saccades is linked to decision making at the central stage of saccade programming. We propose that the N2 wave, which develops in the middle of the latent period of memory-guided saccades, reflects this process. The topography and spatial dynamics of the N1, P1, and N2 components indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and by left-hemisphere mechanisms of motor attention.
Effects of feature-selective and spatial attention at different stages of visual processing.
Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M
2011-01-01
We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
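As an illustration of the frequency-tagging logic described above (not the authors' analysis pipeline), the sketch below estimates SSVEP amplitude at each tagging frequency from a single EEG channel via the Fourier spectrum; the sampling rate and tagging frequencies are hypothetical.

```python
import numpy as np

def ssvep_amplitudes(eeg, fs, tag_freqs):
    """Estimate SSVEP amplitude at each tagging frequency from one EEG channel.

    eeg: 1-D array of voltage samples; fs: sampling rate in Hz;
    tag_freqs: flicker frequencies (Hz) used to tag the stimuli.
    Returns the single-sided FFT amplitude at the bin nearest each frequency.
    """
    eeg = np.asarray(eeg, dtype=float)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / eeg.size
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

# Hypothetical example: 10 s of data at 500 Hz with stimuli tagged at 10 and 12 Hz.
t = np.arange(0, 10, 1.0 / 500)
fake = 2.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 12 * t)
print(ssvep_amplitudes(fake, fs=500, tag_freqs=[10.0, 12.0]))
```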
Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred
2012-01-01
Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase (COMT) genes on the time-course of visual processing in a contingent negative variation (CNV) task. Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499
Oluk, Can; Pavan, Andrea; Kafaligonul, Hulusi
2016-01-01
At the early stages of visual processing, information is processed by two major thalamic pathways encoding brightness increments (ON) and decrements (OFF). Accumulating evidence suggests that these pathways interact and merge as early as in primary visual cortex. Using regular and reverse-phi motion in a rapid adaptation paradigm, we investigated the temporal dynamics of within- and across-pathway mechanisms for motion processing. When the adaptation duration was short (188 ms), reverse-phi and regular motion led to similar adaptation effects, suggesting that the information from the two pathways is combined efficiently at early stages of motion processing. However, as the adaptation duration was increased to 752 ms, reverse-phi and regular motion showed distinct adaptation effects depending on the test pattern used, engaging spatiotemporal correlation between either the same or opposite contrast polarities. Overall, these findings indicate that spatiotemporal correlation within and across ON-OFF pathways for motion processing can be selectively adapted, and they support models that integrate within- and across-pathway mechanisms for motion processing. PMID:27667401
Fisher, Katie; Towler, John; Eimer, Martin
2016-01-08
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Brain Organization of the Preparation for Visual Recognition in Preadolescent Children].
Farber, D A; Kurganskii, A V; Petrenko, N E
2015-01-01
We studied the brain organization of preparation for the perception of incomplete images fragmented to different extents. The functional connections of ventrolateral and dorsolateral cortical zones with other cortical zones were examined in 10-11-year-old and 11-12-year-old children at three successive stages of preparation for the perception of incomplete images, and these data were compared with those obtained in adults. To reveal the effect of preparatory processes on image recognition, we also analyzed regional event-related potentials. In adults, the functional interaction between dorsolateral and ventrolateral prefrontal cortex and other cortical zones of the right hemisphere was enhanced at the stage of waiting for a not-yet-recognizable image, whereas in the left hemisphere the links became stronger shortly before successful recognition of a stimulus. In children, the stage-related changes in functional interactions were similar in both hemispheres, with a peak of interaction at the stage preceding successful recognition. In 11-12-year-old children the ventrolateral cortex was involved in both the preparatory stage and recognition to a smaller extent than in adults and 10-11-year-old children. At the same time, the 11-12-year-old group showed a more mature pattern of dorsolateral cortex involvement, which supported more effective recognition of incomplete images in this group compared with 10-11-year-old children. We suggest that the features of the brain organization of visual recognition, and of the preceding preparatory processes, in 11-12-year-old children result from multidirectional effects of sex hormones on the functioning of different zones of the prefrontal cortex at the early stages of sexual maturation.
Schwitzer, Thomas; Schwan, Raymund; Angioi-Duprez, Karine; Ingster-Moati, Isabelle; Lalanne, Laurence; Giersch, Anne; Laprevote, Vincent
2015-01-01
Cannabis is one of the most prevalent drugs used worldwide. Regular cannabis use is associated with impairments in highly integrative cognitive functions such as memory, attention and executive functions. To date, the cerebral mechanisms of these deficits are still poorly understood. Studying the processing of visual information may offer an innovative and relevant approach to evaluate the cerebral impact of exogenous cannabinoids on the human brain. Furthermore, this knowledge is required to understand the impact of cannabis intake in everyday life, and especially in car drivers. Here we review the role of the endocannabinoids in the functioning of the visual system and the potential involvement of cannabis use in visual dysfunctions. This review describes the presence of the endocannabinoids in the critical stages of visual information processing, and their role in the modulation of visual neurotransmission and visual synaptic plasticity, thereby enabling them to alter the transmission of the visual signal. We also review several induced visual changes, together with experimental dysfunctions reported in cannabis users. In the discussion, we consider these results in relation to the existing literature. We argue for more involvement of public health research in the study of visual function in cannabis users, especially because cannabis use is implicated in driving impairments. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.
Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals
Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2011-01-01
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315
Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam
2011-08-03
Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains-which can have timing as precise as 1 ms-is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
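The nonlinear model itself is not specified in the abstract; as a hedged illustration of the general idea (an excitatory drive opposed by a delayed suppressive input, with firing only in windows where excitation exceeds suppression), here is a toy rate-model sketch in which every parameter is invented:

```python
import numpy as np

def thalamic_rate(stimulus_drive, dt_ms=1.0, suppression_delay_ms=5.0, w_supp=0.9):
    """Toy rate model: output occurs only where excitation exceeds delayed suppression.

    stimulus_drive: 1-D array of filtered stimulus drive sampled every dt_ms.
    The suppressive input is a weighted copy of the same drive delayed by
    suppression_delay_ms; the output is the rectified difference of the two.
    """
    drive = np.asarray(stimulus_drive, dtype=float)
    delay = int(round(suppression_delay_ms / dt_ms))
    suppression = np.concatenate([np.zeros(delay), drive[:len(drive) - delay]])
    return np.maximum(drive - w_supp * suppression, 0.0)

# A step of stimulus drive at 20 ms yields only a brief response window (~20-24 ms),
# because the delayed suppression catches up and cancels most of the excitation.
drive = np.zeros(60)
drive[20:] = 1.0
print(np.nonzero(thalamic_rate(drive) > 0.5)[0])
```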
Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.
2008-01-01
Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or if a building had 1 or 2 stories. Three ERPs were examined: (1) P100 to examine basic visual processing, (2) N170 to examine facial feature encoding, and (3) N250 to examine affect decoding. Behavioral performance on each task was also measured. Results showed that schizophrenia patients’ P100 was comparable to the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest for the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance in any of the 3 identification tasks. The pattern of intact P100 and N170 suggest that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704
Liu, B; Wang, Z; Wu, G; Meng, X
2011-04-28
In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Working memory-driven attention improves spatial resolution: Support for perceptual enhancement.
Pan, Yi; Luo, Qianying; Cheng, Min
2016-08-01
Previous research has indicated that attention can be biased toward those stimuli matching the contents of working memory and thereby facilitates visual processing at the location of the memory-matching stimuli. However, whether this working memory-driven attentional modulation takes place on early perceptual processes remains unclear. Our present results showed that working memory-driven attention improved identification of a brief Landolt target presented alone in the visual field. Because the suprathreshold target appeared without any external noise added (i.e., no distractors or masks), the results suggest that working memory-driven attention enhances the target signal at early perceptual stages of visual processing. Furthermore, given that performance in the Landolt target identification task indexes spatial resolution, this attentional facilitation indicates that working memory-driven attention can boost early perceptual processing via enhancement of spatial resolution at the attended location.
Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Welch, Robert B.
1994-01-01
Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
Rhythmic Oscillations of Visual Contrast Sensitivity Synchronized with Action
Tomassini, Alice; Spinelli, Donatella; Jacono, Marco; Sandini, Giulio; Morrone, Maria Concetta
2016-01-01
It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop. PMID:25948254
Feature integration and object representations along the dorsal stream visual hierarchy
Perry, Carolyn Jeane; Fallah, Mazyar
2014-01-01
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147
Müller-Oehring, Eva M; Schulte, Tilman; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V
2013-01-01
Decline in visuospatial abilities with advancing age has been attributed to a demise of bottom-up and top-down functions involving sensory processing, selective attention, and executive control. These functions may be differentially affected by age-related volume shrinkage of subcortical and cortical nodes subserving the dorsal and ventral processing streams and the corpus callosum mediating interhemispheric information exchange. Fifty-five healthy adults (25-84 years) underwent structural MRI and performed a visual search task to test perceptual and attentional demands by combining feature-conjunction searches with "gestalt" grouping and attentional cueing paradigms. Poorer conjunction, but not feature, search performance was related to older age and volume shrinkage of nodes in the dorsolateral processing stream. When displays allowed perceptual grouping through distractor homogeneity, poorer conjunction-search performance correlated with smaller ventrolateral prefrontal cortical and callosal volumes. An alerting cue attenuated age effects on conjunction search, and the alertness benefit was associated with thalamic, callosal, and temporal cortex volumes. Our results indicate that older adults can capitalize on early parallel stages of visual information processing, whereas age-related limitations arise at later serial processing stages requiring self-guided selective attention and executive control. These limitations are explained in part by age-related brain volume shrinkage and can be mitigated by external cues.
Visual search, visual streams, and visual architectures.
Green, M
1991-10-01
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
Vigilance and iconic memory in children at high risk for alcoholism.
Steinhauer, S R; Locke, J; Hill, S Y
1997-07-01
Previous studies report reduced visual event-related potential (ERP) amplitudes in young males at high risk for alcoholism. These findings could involve difficulties at several stages of visual processing. This study was aimed at examining vigilance performance and iconic memory functions in children at high risk or low risk for alcoholism. Sustained vigilance and retrieval from iconic memory were evaluated in 54 (29 male) white children at high risk and 47 (25 male) white children at low risk for developing alcoholism. Children were also grouped according to gender and age (younger: 8-12 years; older: 13-18 years). No differences in visual sensitivity, response criterion, or reaction time were associated with risk status on the degraded visual stimulus version of the Continuous Performance Test. For the Span of Apprehension, no differences were found due to risk status when only 1 or 5 distractors were presented, although with 9 distractors a significant effect of risk status was found when it was tested as an interaction with gender and age (decreased accuracy for older high-risk boys compared to older low-risk boys). These findings suggest that ERP deviations are not attributable to deficits at these stages of visual processing, but instead represent difficulty involving more complex utilization of information. The implication is that the differences between high- and low-risk children reported previously for visual ERP components (e.g., P300) are not attributable to deficits of attentional or iconic memory mechanisms.
Daenen, Liesbeth; Nijs, Jo; Roussel, Nathalie; Wouters, Kristien; Cras, Patrick
2012-01-01
Sensory and motor system dysfunctions have been documented in a proportion of patients with acute whiplash-associated disorders (WAD). Sensorimotor incongruence may occur and hence may explain pain and other sensations in the acute stage after the trauma. The present study aimed at (1) evaluating whether a visually mediated incongruence between sensory feedback and motor output increases symptoms and triggers additional sensations in patients with acute WAD, and (2) investigating whether the pattern of sensations in response to sensorimotor incongruence differs among patients suffering from acute and chronic WAD and healthy controls. This was an experimental study. Patients with acute WAD were recruited within one month after whiplash injury via the emergency department of a local Red Cross medical care unit, the Antwerp University Hospital, and through primary care practices. Patients with chronic WAD were recruited through an advertisement on the World Wide Web and from the medical database of a local Red Cross medical care unit. Healthy controls were recruited from among the university college staff, family members, and acquaintances of the researchers. Thirty patients with acute WAD, 35 patients with chronic WAD, and 31 healthy persons were subjected to a coordination test. They performed congruent and incongruent arm movements while viewing a whiteboard or mirror. Twenty-eight patients with acute WAD reported sensations such as pain, tightness, a feeling of peculiarity, and tiredness at some stage of the test protocol. No significant differences in the frequencies and intensities of sensations were found between the various test stages (P > .05). Significantly more sensations were reported during the incongruent mirror stage compared to the incongruent control stage (P < .05). The pattern in intensity of sensations across the congruent and incongruent stages was significantly different between the WAD groups and the control group. The course and prognostic value of susceptibility to sensorimotor incongruence after an acute whiplash trauma are not yet clear from these results. A prospective longitudinal study with an expanded study population is needed to investigate whether those with a lowered threshold to visually mediated sensorimotor incongruence in the acute stage are at risk of developing persistent pain and disability. Patients with acute WAD present an exacerbation of symptoms and additional sensations in response to visually mediated changes during action. These results indicate an altered perception of distorted visual feedback and suggest altered sensorimotor processing in the central nervous system in patients with acute WAD.
Knowledge is power: how conceptual knowledge transforms visual cognition.
Collins, Jessica A; Olson, Ingrid R
2014-08-01
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Sleep-waking cycle in the cerveau isolé cat.
Slósarska, M; Zernicki, B
1973-06-01
The experiments were performed on ten chronic low cerveau isolé cats: in eight cats the brain stem transection was prepontine and in two cats, intercollicular. The preparations survived from 24 to 3 days. During 24-36 hr sessions the ECoG activity was continuously recorded, and the ocular and ECoG components of the orienting reflexes to visual and olfactory stimuli were studied. Three periods can be recognized in the recovery process of the low cerveau isolé cat; they are called the acute, early chronic, and late chronic stages. The acute stage lasts 1 day and the early chronic stage seems to last at least 3 weeks. During the acute stage the ability to desynchronize the EEG, either spontaneously or in response to sensory stimulation, is dramatically impaired and the pupils are fissurated; thus the cat is comatose. During the early chronic stage, although the ECoG synchronization-desynchronization cycle and the associated fissurated myosis-myosis cycle already exist, the episodes of ECoG desynchronization occupy only a small percentage of time and usually develop slowly. Visual and olfactory stimuli are often ineffective; thus the cat is semicomatose. In the late chronic stage the sleep-waking cycle is present. The animal can be easily awakened by visual and olfactory stimuli. The intensity of the ECoG arousal to visual stimuli and the distribution of time between alert wakefulness, drowsiness, light synchronized sleep and deep synchronized sleep are similar to those in the chronic pretrigeminal cat. The recovery of the cerveau isolé seems to reach a steady level when the sleep-waking cycle becomes similar to that present in the chronic pretrigeminal cat. During the whole survival period the vertical following reflex is abortive.
Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried
2013-01-01
Humans are social beings and often have to perceive and perform within groups. In conflict situations, this puts them under pressure to either adhere to the group opinion or to risk controversy with the group. Psychological experiments have demonstrated that study participants adapt to erroneous group opinions in visual perception tasks, which they can easily solve correctly when performing on their own. To date, however, it has been unclear whether this phenomenon of social conformity influences early stages of perception that might not even reach awareness, or later stages of conscious decision-making. Using electroencephalography, this study revealed that social conformity to an incorrect group opinion was accompanied by a decrease of the posterior-lateral P1 along with a decrease of the later centro-parietal P3. These results suggest that group pressure situations impact early unconscious visual perceptual processing, resulting in diminished stimulus discrimination at later stages and adaptation even to an incorrect group opinion. These findings might have important implications for understanding social behavior in group settings and are discussed within the framework of social influence on eyewitness testimony.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small (typically fewer than 10), meaningful, and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
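The Stage 1 output of a framework like HAMS is a set of calibration alternatives that trade off multiple error criteria. As a rough illustration of that idea (not the GOMORS algorithm itself), the sketch below filters hypothetical objective values down to the non-dominated (Pareto-optimal) parameter sets; the two criteria and all numbers are invented for the example.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows (all objectives are minimized)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row i is dominated if some other row is <= in every objective
        # and strictly < in at least one of them.
        dominated_by = np.all(objectives <= objectives[i], axis=1) & \
                       np.any(objectives < objectives[i], axis=1)
        keep[i] = not dominated_by.any()
    return keep

# Hypothetical Stage 1 output: each row is one parameter set, each column one
# error criterion to minimize (e.g., 1 - NSE for high flows and for low flows).
errors = np.array([
    [0.20, 0.55],
    [0.25, 0.40],
    [0.35, 0.30],
    [0.30, 0.50],   # dominated by the second row
    [0.50, 0.28],
])

print("non-dominated alternatives:", np.where(pareto_front(errors))[0])
```

Stages 2 and 3 of such a framework would then operate only on the rows this filter retains.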
Fixational Eye Movements in the Earliest Stage of Metazoan Evolution
Bielecki, Jan; Høeg, Jens T.; Garm, Anders
2013-01-01
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673
Fixational eye movements in the earliest stage of metazoan evolution.
Bielecki, Jan; Høeg, Jens T; Garm, Anders
2013-01-01
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur.
Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker
2012-11-01
Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
A neural model of the temporal dynamics of figure-ground segregation in motion perception.
Raudies, Florian; Neumann, Heiko
2010-03-01
How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons at levels as low as area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as their response variations that were caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.
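The paper's central proposal is that the late temporal episodes of V1 responses reflect modulating (multiplicative) feedback from consolidated shape representations. The toy simulation below is only a schematic of that idea, not the authors' network: the onset times, gain, and delays are arbitrary placeholder values chosen to show how delayed multiplicative feedback creates a distinct late response epoch without driving the cell by itself.

```python
import numpy as np

# Time axis in milliseconds.
t = np.arange(0.0, 300.0, 1.0)

# Constant feedforward drive after a nominal ~40 ms conduction delay to V1.
feedforward = np.where(t >= 40, 1.0, 0.0)

# Delayed feedback from higher areas; onset time and gain are placeholders.
feedback = np.where(t >= 120, 1.0, 0.0)
feedback_gain = 0.8

# Multiplicative (modulatory) feedback: it scales the driven response but
# produces no activity on its own when the feedforward drive is zero.
response = feedforward * (1.0 + feedback_gain * feedback)

early = response[(t >= 50) & (t < 120)].mean()
late = response[(t >= 130) & (t < 250)].mean()
print(f"early epoch ~{early:.2f}, late epoch ~{late:.2f}")
```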
Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall
Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat
2014-01-01
Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture was presented for five seconds, followed by a three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing and participants were not informed about the task's purpose. Using pattern recognition techniques, participants' EOG signals in response to repeated and non-repeated pictures were classified for the with-sound and without-sound stages. The method was validated with eight different participants. The recognition rate in the with-sound stage was significantly lower than in the without-sound stage. The results demonstrated that the familiarity of visual-auditory stimuli can be detected from EOG signals and that the auditory input potentially improves the visual recall process. PMID:25436085
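The abstract does not specify the features or classifier used, so the sketch below is a generic stand-in for the pattern-recognition step: simple amplitude and zero-crossing features computed per EOG epoch, fed to a cross-validated SVM on synthetic data. The feature set, classifier choice, and data are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def eog_features(epoch: np.ndarray) -> np.ndarray:
    """Crude per-epoch features: amplitude spread and zero-crossing count
    (a rough proxy for eye-movement activity)."""
    zero_crossings = np.sum(np.diff(np.sign(epoch)) != 0)
    return np.array([epoch.std(), np.ptp(epoch), zero_crossings])

# Synthetic stand-in data: 80 epochs of 5 s at 200 Hz; label 1 = repeated picture.
n_epochs, n_samples = 80, 1000
labels = rng.integers(0, 2, n_epochs)
epochs = rng.standard_normal((n_epochs, n_samples))
epochs[labels == 1] *= 0.8          # pretend repeated pictures yield calmer EOG

X = np.vstack([eog_features(e) for e in epochs])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```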
Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel
2015-01-01
A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher-level visual abnormalities. While earlier reports of enhanced, "eagle-like" visual acuity in ASD contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the occipital cortex level. Therefore, in this project, we have investigated the electrophysiology of very early visual processing by analyzing the pattern electroretinogram-based contrast gain, the background noise amplitude, and the psychophysical visual acuities of participants with high-functioning ASD and controls with equal education. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or of altered psychophysically measured visual acuity. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.
Paradoxical Long-Timespan Opening of the Hole in Self-Supported Water Films of Nanometer Thickness.
Barkay, Z; Bormashenko, E
2017-05-16
The opening of holes in self-supported thin (nanoscaled) water films has been investigated in situ with the environmental scanning electron microscope. The opening of a hole occurs in two stages. In the first stage, the rim surrounding a hole is formed, resembling the process observed when soap bubbles are punctured. In the second stage, exponential growth of the hole is observed, with a characteristic time of a dozen seconds. We explain the exponential kinetics of hole growth by the balance between inertia (gravity) and viscous dissipation. The kinetics of opening a microscale hole is governed by processes taking place in the nanometer-thick bulk of the self-supported liquid film. Nanoparticles provide markers for the visualization of the processes occurring in self-supported thin nanoscale liquid films.
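A compact way to write the exponential kinetics reported for the second stage, assuming the hole radius grows in proportion to its instantaneous size with a characteristic time τ (the value below only indicates the reported order of magnitude):

```latex
\frac{dr}{dt} = \frac{r}{\tau}
\quad\Longrightarrow\quad
r(t) = r_0 \, e^{t/\tau}, \qquad \tau \approx 10\ \mathrm{s}
```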
Does the reading of different orthographies produce distinct brain activity patterns? An ERP study.
Bar-Kochva, Irit; Breznitz, Zvia
2012-01-01
Orthographies vary in the degree of transparency of spelling-sound correspondence. These range from shallow orthographies with transparent grapheme-phoneme relations, to deep orthographies, in which these relations are opaque. Only a few studies have examined whether orthographic depth is reflected in brain activity. In these studies a between-language design was applied, making it difficult to isolate the aspect of orthographic depth. In the present work this question was examined using a within-subject-and-language investigation. The participants were speakers of Hebrew, as they are skilled in reading two forms of script transcribing the same oral language. One form is the shallow pointed script (with diacritics), and the other is the deep unpointed script (without diacritics). Event-related potentials (ERPs) were recorded while skilled readers carried out a lexical decision task in the two forms of script. A visual non-orthographic task controlled for the visual difference between the scripts (resulting from the addition of diacritics to the pointed script only). At an early visual-perceptual stage of processing (~165 ms after target onset), the pointed script evoked larger amplitudes with longer latencies than the unpointed script at occipital-temporal sites. However, these effects were not restricted to orthographic processing, and may therefore have reflected, at least in part, the visual load imposed by the diacritics. Nevertheless, the results implied that distinct orthographic processing may have also contributed to these effects. At later stages (~340 ms after target onset) the unpointed script elicited larger amplitudes than the pointed one with earlier latencies. As this latency has been linked to orthographic-linguistic processing and to the classification of stimuli, it is suggested that these differences are associated with distinct lexical processing of a shallow and a deep orthography.
Yang, Limin; Chen, Yuanyuan; Yu, Zhengze; Pan, Wei; Wang, Hongyu; Li, Na; Tang, Bo
2017-08-23
Autophagy and apoptosis are closely associated with various pathological and physiological processes in cell cycles. Investigating the dynamic changes of intracellular active molecules in autophagy and apoptosis is of great significance for clarifying their inter-relationship and regulating mechanism in many diseases. In this study, we develop a dual-ratiometric fluorescent nanoprobe for quantitatively differentiating the dynamic process of superoxide anion (O2•-) and pH changes in autophagy and apoptosis in HeLa cells. A rhodamine B-loaded mesoporous silica core was used as the reference, and fluorescence probes for pH and O2•- measurement were doped in the outer layer shell of SiO2. Then, chitosan and triphenylphosphonium were modified on the surface of SiO2. The experimental results showed that the nanoprobe is able to simultaneously and precisely visualize the changes of mitochondrial O2•- and pH in HeLa cells. The kinetics data revealed that the changes of pH and O2•- during autophagy and apoptosis in HeLa cells were significantly different. The pH value was decreased at the early stage of apoptosis and autophagy, whereas the O2•- level was enhanced at the early stage of apoptosis and almost unchanged at the initial stage of autophagy. At the late stage of apoptosis and autophagy, the concentration of O2•- was increased, whereas the pH was decreased at the late stage of autophagy and almost unchanged at the late stage of apoptosis. We hope that the present results provide useful information for studying the effects of O2•- and pH in autophagy and apoptosis in various pathological conditions and diseases.
A review of visual memory capacity: Beyond individual items and towards structured representations
Brady, Timothy F.; Konkle, Talia; Alvarez, George A.
2012-01-01
Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system - going beyond quantifying how many items can be remembered, and moving towards structured representations - but how we model memory systems and memory processes. PMID:21617025
Dynamic functional brain networks involved in simple visual discrimination learning.
Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis
2014-10-01
Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Information extraction during simultaneous motion processing.
Rideaux, Reuben; Edwards, Mark
2014-02-01
When confronted with multiple moving objects the visual system can process them in two stages: an initial stage in which a limited number of signals are processed in parallel (i.e. simultaneously) followed by a sequential stage. We previously demonstrated that during the simultaneous stage, observers could discriminate between presentations containing up to 5 vs. 6 spatially localized motion signals (Edwards & Rideaux, 2013). Here we investigate what information is actually extracted during the simultaneous stage and whether the simultaneous limit varies with the detail of information extracted. This was achieved by measuring the ability of observers to extract varied information from low detail, i.e. the number of signals presented, to high detail, i.e. the actual directions present and the direction of a specific element, during the simultaneous stage. The results indicate that the resolution of simultaneous processing varies as a function of the information which is extracted, i.e. as the information extraction becomes more detailed, from the number of moving elements to the direction of a specific element, the capacity to process multiple signals is reduced. Thus, when assigning a capacity to simultaneous motion processing, this must be qualified by designating the degree of information extraction. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. PMID:27630555
[Symptoms and lesion localization in visual agnosia].
Suzuki, Kyoko
2004-11-01
There are two cortical visual processing streams, the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream could result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, differ from others in that patients can recognize a face as a face and buildings as buildings, but cannot identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition has been confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Larger lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, in agreement with neuroimaging studies that revealed activation of bilateral occipito-temporal regions during object recognition tasks.
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
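The growth-cone account can be caricatured as a spread process whose local speed scales with the size of the region being traversed. The sketch below is not the authors' model; it is a minimal Dijkstra-style spread over a binary image in which the cost of entering a pixel is inversely related to its distance-transform value, so the attention-like labeling crosses wide, homogeneous areas quickly and narrow bridges slowly.

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def spread_times(region: np.ndarray, seed: tuple[int, int]) -> np.ndarray:
    """Scale-dependent spread over a binary region (True = figure)."""
    scale = distance_transform_edt(region)        # local "cone" radius
    times = np.full(region.shape, np.inf)
    times[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > times[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < region.shape[0] and 0 <= nc < region.shape[1] and region[nr, nc]:
                nt = t + 1.0 / max(scale[nr, nc], 1e-6)   # slower where scale is small
                if nt < times[nr, nc]:
                    times[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return times

# Two wide squares connected by a thin bridge: spreading across the bridge is slow.
img = np.zeros((20, 40), dtype=bool)
img[2:18, 2:18] = True
img[2:18, 22:38] = True
img[9:11, 18:22] = True
t = spread_times(img, (10, 5))
print("same square:", round(t[10, 15], 1), " across bridge:", round(t[10, 30], 1))
```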
The multisensory function of the human primary visual cortex.
Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J
2016-03-01
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient and hard evidence that supports this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.
Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan
2015-08-01
Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; the stored visual information rather continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation order. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When the propensity to inattention is high, ERP recordings show diminished amplification together with a decrease in theta-band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (although no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect and reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.
Hasanov, Samir; Demirkilinc Biler, Elif; Acarer, Ahmet; Akkın, Cezmi; Colakoglu, Zafer; Uretmen, Onder
2018-05-09
To prospectively evaluate and follow up functional and morphological changes of the optic nerve and ocular structures in patients with early-stage Parkinson's disease. Nineteen patients with a diagnosis of early-stage Parkinson's disease and 19 age-matched healthy controls were included in the study. All participants were examined at least three times, at intervals of at least 6 months after the initial examination. Pattern visual evoked potentials (VEP), contrast sensitivity assessments under photopic conditions, color vision tests with Ishihara cards and full-field visual field tests were performed, in addition to measurement of retinal nerve fiber layer (RNFL) thickness in four quadrants (top, bottom, nasal, temporal), central and mean macular thickness, and macular volume. Best-corrected visual acuity was significantly lower in the study group at all three examinations. Contrast sensitivity values of the patient group were significantly lower at all spatial frequencies. The P100 wave latency of the VEP was significantly longer, and its amplitude lower, in the patient group; however, no significant deterioration was observed during the follow-up. Although average peripapillary RNFL thickness did not differ significantly between groups, RNFL thickness in the upper quadrant was thinner in the patient group. While there was initially no difference between the groups in mean macular thickness and total macular volume, a significant decrease occurred in the patient group during the follow-up. A significant deterioration in the visual field was observed in the patient group across the initial and follow-up examinations. Structural and functional abnormalities, demonstrated electrophysiologically and morphologically, exist in different parts of the visual pathways in early-stage Parkinson's disease.
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Closed head injury and perceptual processing in dual-task situations.
Hein, G; Schubert, T; von Cramon, D Y
2005-01-01
Using a classical psychological refractory period (PRP) paradigm we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI. It is not evident in other groups of neurological patients like Parkinson's disease patients. We conclude that the non-interfering processing of input stages in dual-tasks requires cognitive control. A decline in the control of input processes should be considered as one source of dual-task deficits in CHI patients.
Hou, X R; Qin, J Y; Ren, Z Q
2017-02-11
Objective: To investigate the rationality of morphological staging of glaucomatous visual field damage, its relationship with visual field indices, and their diagnostic value. Methods: Retrospective series case study. Two hundred and seventy-four glaucoma patients and 100 normal controls received visual field examination with a Humphrey perimeter using the standard automatic perimetry (SAP) program from March 2014 to September 2014. Glaucoma patients were graded into four stages according to characteristic morphological damage of the visual field; the distributions of mean defect (MD) and visual field index (VFI) for each stage were plotted, and receiver operating characteristic (ROC) curves were used to explore their correlation with the stages. The diagnostic value of MD and VFI was also compared. For the comparison of general data of subjects, categorical variables were compared using the χ² test and numerical variables using the F test. MD and VFI were compared among visual field stages using ANOVA, followed by multiple comparisons using the LSD method. The correlations of MD and VFI with the visual field stages defined their diagnostic value, which was compared using the area under the ROC curve (AUC). Results: No characteristic visual field damage was found in the normal control group, in which MD and VFI were (-0.06±1.24) dB and (99.15±0.76)%, respectively. Glaucomatous visual field damage was graded into early, medium, late and end stages according to morphological characteristics. MD for each stage was (-2.83±2.00) dB, (-9.70±3.68) dB, (-18.46±2.90) dB, and (-27.96±2.76) dB, respectively. VFI for each stage was (93.84±3.61)%, (75.16±10.85)%, (49.36±11.26)% and (17.65±10.59)%, respectively. MD and VFI differed significantly across the stages of the glaucomatous group and the normal control group (F=1165.53, P<0.01 for MD; F=1028.04, P<0.01 for VFI). The AUC was A(MD)=0.91, Se(MD)=0.01 (95% confidence interval 0.89-0.94) for MD, and A(VFI)=0.97, Se(VFI)=0.01 (95% confidence interval 0.94-0.10) for VFI; thus AUC(VFI)>AUC(MD) (P<0.05). Conclusions: It is feasible and rational to grade glaucomatous visual field damage into early, medium, late and end stages using the Humphrey perimeter. The distributions of MD and VFI for each stage were relatively concentrated. Both MD and VFI were useful for grading glaucomatous visual field damage, with a preference for VFI. (Chin J Ophthalmol, 2017, 53: 92-97).
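The comparison of MD and VFI as diagnostic indices comes down to comparing areas under their ROC curves. The snippet below reruns that comparison on synthetic values loosely patterned on the reported group means and spreads (the numbers are illustrative stand-ins, not the study data), treating lower MD/VFI as the sign of glaucomatous damage.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in values; group sizes match the study, distributions do not.
n_glaucoma, n_control = 274, 100
md = np.concatenate([rng.normal(-12.0, 8.0, n_glaucoma),    # patients, dB
                     rng.normal(-0.06, 1.24, n_control)])   # controls, dB
vfi = np.concatenate([rng.normal(62.0, 25.0, n_glaucoma),   # patients, %
                      rng.normal(99.2, 0.8, n_control)])    # controls, %
y = np.concatenate([np.ones(n_glaucoma), np.zeros(n_control)])  # 1 = glaucoma

# Lower MD and lower VFI indicate more damage, so negate them so that a
# higher score corresponds to the positive (glaucoma) class.
print("AUC(MD):  %.2f" % roc_auc_score(y, -md))
print("AUC(VFI): %.2f" % roc_auc_score(y, -vfi))
```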
Fandom Biases Retrospective Judgments Not Perception.
Huff, Markus; Papenmeier, Frank; Maurer, Annika E; Meitz, Tino G K; Garsoffky, Bärbel; Schwan, Stephan
2017-02-24
Attitudes and motivations have been shown to affect the processing of visual input, indicating that observers may each literally see a given situation in a different way. Yet, in real life, processing information in an unbiased manner is considered to be of high adaptive value. Attitudinal and motivational effects were found for attention, characterization, categorization, and memory. On the other hand, for dynamic real-life events, visual processing has been found to be highly synchronous among viewers. Thus, while in a seminal study fandom as a particularly strong case of attitudes did bias judgments of a sports event, it left the question open whether attitudes do bias prior processing stages. Here, we investigated influences of fandom during the live TV broadcasting of the 2013 UEFA-Champions-League Final regarding attention, event segmentation, immediate and delayed cued recall, as well as affect, memory confidence, and retrospective judgments. Even though we replicated biased retrospective judgments, we found that eye-movements, event segmentation, and cued recall were largely similar across both groups of fans. Our findings demonstrate that, while highly involving sports events are interpreted in a fan-dependent way, at initial stages they are processed in an unbiased manner.
Fandom Biases Retrospective Judgments Not Perception
Huff, Markus; Papenmeier, Frank; Maurer, Annika E.; Meitz, Tino G. K.; Garsoffky, Bärbel; Schwan, Stephan
2017-01-01
Attitudes and motivations have been shown to affect the processing of visual input, indicating that observers may each literally see a given situation in a different way. Yet, in real life, processing information in an unbiased manner is considered to be of high adaptive value. Attitudinal and motivational effects were found for attention, characterization, categorization, and memory. On the other hand, for dynamic real-life events, visual processing has been found to be highly synchronous among viewers. Thus, while in a seminal study fandom as a particularly strong case of attitudes did bias judgments of a sports event, it left the question open whether attitudes do bias prior processing stages. Here, we investigated influences of fandom during the live TV broadcasting of the 2013 UEFA-Champions-League Final regarding attention, event segmentation, immediate and delayed cued recall, as well as affect, memory confidence, and retrospective judgments. Even though we replicated biased retrospective judgments, we found that eye-movements, event segmentation, and cued recall were largely similar across both groups of fans. Our findings demonstrate that, while highly involving sports events are interpreted in a fan-dependent way, at initial stages they are processed in an unbiased manner. PMID:28233877
Timing the impact of literacy on visual processing
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460
Timing the impact of literacy on visual processing.
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas
2014-12-09
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.
A Multi-Stage Model for Fundamental Functional Properties in Primary Visual Cortex
Hesam Shariati, Nastaran; Freeman, Alan W.
2012-01-01
Many neurons in mammalian primary visual cortex have properties such as sharp tuning for contour orientation, strong selectivity for motion direction, and insensitivity to stimulus polarity, that are not shared with their sub-cortical counterparts. Successful models have been developed for a number of these properties but in one case, direction selectivity, there is no consensus about underlying mechanisms. We here define a model that accounts for many of the empirical observations concerning direction selectivity. The model describes a single column of cat primary visual cortex and comprises a series of processing stages. Each neuron in the first cortical stage receives input from a small number of on-centre and off-centre relay cells in the lateral geniculate nucleus. Consistent with recent physiological evidence, the off-centre inputs to cortex precede the on-centre inputs by a small (∼4 ms) interval, and it is this difference that confers direction selectivity on model neurons. We show that the resulting model successfully matches the following empirical data: the proportion of cells that are direction selective; tilted spatiotemporal receptive fields; phase advance in the response to a stationary contrast-reversing grating stepped across the receptive field. The model also accounts for several other fundamental properties. Receptive fields have elongated subregions, orientation selectivity is strong, and the distribution of orientation tuning bandwidth across neurons is similar to that seen in the laboratory. Finally, neurons in the first stage have properties corresponding to simple cells, and more complex-like cells emerge in later stages. The results therefore show that a simple feed-forward model can account for a number of the fundamental properties of primary visual cortex. PMID:22496811
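The key mechanism in this model is that off-centre geniculate input reaches the first cortical stage a few milliseconds before on-centre input, which tilts the receptive field in space-time and yields direction selectivity. The simulation below is a stripped-down illustration of that single idea (one on and one off subunit, a drifting sinusoid, linear summation plus rectification); the spatial offsets, frequencies, and the ~4 ms delay are placeholder values, not the paper's fitted parameters.

```python
import numpy as np

def cortical_response(direction: int, on_delay_s: float = 0.004) -> float:
    """Mean half-wave-rectified response of a toy cortical cell that sums one
    on-centre and one off-centre LGN input, with the on input delayed.

    direction: +1 or -1, drift direction of a sinusoidal grating.
    """
    sf = 1.0                      # spatial frequency, cycles/deg
    tf = 8.0                      # temporal frequency, Hz
    t = np.arange(0.0, 1.0, 1e-4)
    x_on, x_off = 0.25, 0.0       # subunit positions (deg); 0.25 deg = 90 deg phase

    def luminance(x, time):
        return np.cos(2 * np.pi * (sf * x - direction * tf * time))

    on_input = luminance(x_on, t - on_delay_s)   # on-centre input, delayed
    off_input = -luminance(x_off, t)             # off-centre input (sign-inverted)
    return np.maximum(on_input + off_input, 0.0).mean()

for delay in (0.004, 0.0):
    r_pref = cortical_response(+1, delay)
    r_null = cortical_response(-1, delay)
    di = (r_pref - r_null) / (r_pref + r_null)
    print(f"on-delay {delay * 1000:.0f} ms: direction index = {di:+.3f}")
```

With the delay set to zero the two directions give identical responses, while the small on/off timing difference alone produces a nonzero direction index.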
Emotional facilitation of sensory processing in the visual cortex.
Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2003-01-01
A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.
Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.
Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B
1995-01-01
The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258
Implicit short- and long-term memory direct our gaze in visual search.
Kruijne, Wouter; Meeter, Martijn
2016-04-01
Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was no longer present, and was found even in participants who were unaware of the color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. The aim was to identify and compare regional brain activity associated with one form of visual masking (ie, backward masking) in schizophrenic patients and healthy controls. Subjects received functional magnetic resonance imaging scans. While in the scanner, subjects performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The study was conducted at the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System, with 19 patients with schizophrenia and 19 healthy control subjects. The main outcome measure was the magnitude of the functional magnetic resonance imaging signal during backward masking. Two ROIs (lateral occipital complex [LO] and the human motion selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Chen, Zhongshan; Song, Yanping; Yao, Junping; Weng, Chuanhuang; Yin, Zheng Qin
2013-11-01
Retinitis pigmentosa (RP) is a group of hereditary retinal degenerative diseases characterized by progressive dysfunction of photoreceptors and associated with progressive cell loss; nevertheless, little is known about how the loss of rods and cones affects the surviving inner retinal neurons and networks. Retinal ganglion cells (RGCs) process and convey visual information from the retina to visual centers in the brain, and intact ion channels determine the normal reception and transmission of visual signals by RGCs. Previous work on the Royal College of Surgeons (RCS) rat, a classical animal model of RP, indicated that, at late stages of retinal degeneration, RGCs were also morphologically and functionally affected. Here, retrograde labeling of RGCs with Fluorogold was performed to investigate the distribution, density, and morphological changes of RGCs during retinal degeneration. Patch clamp recording, western blot, and immunofluorescence staining were then performed to study the sodium and potassium channel properties of RGCs, so as to explore the molecular basis of the alterations in RGC membrane properties and firing. We found that the resting membrane potential, input resistance, and capacitance of RGCs changed significantly at the late stage of retinal degeneration. Action potentials could not be evoked in a subset of RGCs. Recordings of inward sodium and outward potassium currents showed that the sodium current was severely impaired, whereas the potassium current was only slightly affected. Expression of sodium channel proteins was dramatically reduced at the late stage of retinal degeneration. These results suggest that RGC density decreased, process ramification was impaired, and sodium channel proteins were disrupted, which impaired the electrophysiological function of RGCs and eventually resulted in the loss of visual function.
Effects of symbol type and numerical distance on the human event-related potential.
Jiang, Ting; Qiao, Sibing; Li, Jin; Cao, Zhongyu; Gao, Xuefei; Song, Yan; Xue, Gui; Dong, Qi; Chen, Chuansheng
2010-01-01
This study investigated the influence of symbol type and numerical distance on the amplitudes and peak latencies of event-related potentials (ERPs). Our aim was to (1) determine the point in time at which magnitude information is accessed in visual number processing; and (2) identify at what stage the advantage of Arabic digits over Chinese verbal numbers occurs. ERPs were recorded from 64 scalp sites while subjects (n=26) performed a classification task. Results showed that larger ERP amplitudes were elicited by numbers in the distance-close condition than in the distance-far condition in the VPP component over centro-frontal sites. Furthermore, the VPP latency varied as a function of symbol type, but the N170 did not. These results demonstrate that magnitude information is accessed as early as 150 ms after the onset of visual number stimuli and that the advantage of Arabic digits over verbal numbers should be localized to the VPP component. We establish the VPP as a critical ERP component to report in studies of numerical cognition, and our results call into question the N170/VPP association hypothesis and the serial-stage model of visual number comparison.
Motion transparency: making models of motion perception transparent.
Snowden; Verstraten
1999-10-01
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond the first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces and so on. An interesting, if complicated case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as 'filter selection' and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
Effect of contrast on the perception of direction of a moving pattern
NASA Technical Reports Server (NTRS)
Stone, L. S.; Watson, A. B.; Mulligan, J. B.
1989-01-01
A series of experiments examining the effect of contrast on the perception of moving plaids was performed to test the hypothesis that the human visual system determines the direction of a moving plaid in a two-stage process: decomposition into component motions followed by application of the intersection-of-constraints rule. Although there is recent evidence that the first tenet of the hypothesis is correct, i.e., that plaid motion is initially decomposed into the motion of the individual grating components, the nature of the second-stage combination rule has not yet been established. It was found that when the gratings within the plaid are of different contrast, the perceived direction is not predicted by the intersection-of-constraints rule. There is a strong (up to 20 deg) bias in the direction of the higher-contrast grating. A revised model, which incorporates a contrast-dependent weighting of perceived grating speed as observed for one-dimensional patterns, can quantitatively predict most of the results. The results are then discussed in the context of various models of human visual motion processing and of physiological responses of neurons in the primate visual system.
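The intersection-of-constraints computation described above can be made concrete with a small numerical sketch. The snippet below is illustrative only, not the authors' code: it solves for the plaid velocity from two component gratings and shows how a contrast-dependent re-weighting of the perceived component speeds (the saturating form and constant are assumptions) shifts the predicted direction toward the higher-contrast grating.

```python
import numpy as np

def ioc_velocity(orientations_deg, speeds):
    """Intersection of constraints: solve v . n_i = s_i for the plaid velocity,
    where n_i is the unit normal of grating i and s_i its normal speed."""
    normals = np.array([[np.cos(np.deg2rad(o)), np.sin(np.deg2rad(o))]
                        for o in orientations_deg])
    return np.linalg.solve(normals, np.asarray(speeds, dtype=float))

def perceived_speed(speed, contrast, k=0.5):
    """Hypothetical contrast-dependent speed weighting: low contrast looks slower."""
    return speed * contrast / (contrast + k)

# Two gratings, constraint-line normals 60 deg apart, equal speed, unequal contrast.
orientations = [30.0, 90.0]          # directions of the grating normals (deg)
speeds = [1.0, 1.0]                  # normal speeds (deg/s)
contrasts = [0.4, 0.05]              # high- vs low-contrast grating

v_ioc = ioc_velocity(orientations, speeds)
v_biased = ioc_velocity(orientations, [perceived_speed(s, c)
                                       for s, c in zip(speeds, contrasts)])

print("IOC direction (deg):", np.degrees(np.arctan2(v_ioc[1], v_ioc[0])))
print("Contrast-biased direction (deg):",
      np.degrees(np.arctan2(v_biased[1], v_biased[0])))
```

With equal contrasts the predicted direction is the intersection-of-constraints solution (60 deg here); lowering one grating's contrast pulls the prediction toward the higher-contrast grating's normal, qualitatively matching the bias reported above.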
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow typically involves different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering and georeferencing. For approaches using open and free software, each of these stages usually requires a different application. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI acts as a manager of configurations and algorithms, taking advantage of the command line modes of the existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time-consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References Girardeau-Montaut, D. 2015. CloudCompare documentation, accessed at http://cloudcompare.org/ Wu, C. 2015. VisualSFM documentation, accessed at http://ccwu.me/vsfm/doc.html#.
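A minimal sketch of the kind of command-line orchestration such a GUI automates is shown below, written in Python rather than the Matlab implementation described above. The exact VisualSFM and CloudCompare invocations (the sfm+pmvs switch, -SILENT, -O, -SS, and the dense output filename) are assumptions based on the tools' typical command-line modes and should be checked against the documentation cited in the record.

```python
import subprocess
from pathlib import Path

def run(cmd):
    """Run one external command and fail loudly if it returns non-zero."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def reconstruct(image_dir: Path, work_dir: Path):
    work_dir.mkdir(parents=True, exist_ok=True)
    nvm = work_dir / "model.nvm"

    # Stage 1: matching, sparse SfM and dense reconstruction with VisualSFM.
    # "sfm+pmvs" is the assumed switch for running the whole chain in one pass.
    run(["VisualSFM", "sfm+pmvs", str(image_dir), str(nvm)])

    # Stage 2: point-cloud post-processing with CloudCompare in silent mode.
    # A spatial subsampling to 1 cm minimum point spacing is assumed here as an
    # example of a filtering step before georeferencing.
    dense_ply = work_dir / "model.0.ply"   # assumed name of the dense output
    run(["CloudCompare", "-SILENT", "-O", str(dense_ply),
         "-SS", "SPATIAL", "0.01"])

if __name__ == "__main__":
    reconstruct(Path("survey_images"), Path("reconstruction"))
```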
Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry
Jaworska, Katarzyna; Lages, Martin
2014-01-01
Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063
Stress Potentiates Early and Attenuates Late Stages of Visual Processing
2011-01-19
threat (M = 6.5, SD = 20.0) than during safety (M = 19.3, SD = 11.6), t(31) = 6.7, p < 0.001. They also expressed more intense negative emotion on their... threats increase risk assessment (Kavaliers and Choleris, 2001), and fearful facial expressions enhance sensory intake (Susskind et al., 2008). These... visual analog scales to rate the intensity of their emotional experience (anxious, happy, safe, or stressed) during safety and threat blocks. To minimize
The physiology and psychophysics of the color-form relationship: a review
Moutoussis, Konstantinos
2015-01-01
The relationship between color and form has been a long standing issue in visual science. A picture of functional segregation and topographic clustering emerges from anatomical and electrophysiological studies in animals, as well as by brain imaging studies in human. However, one of the many roles of chromatic information is to support form perception, and in some cases it can do so in a way superior to achromatic (luminance) information. This occurs both at an early, contour-detection stage, as well as in late, higher stages involving spatial integration and the perception of global shapes. Pure chromatic contrast can also support several visual illusions related to form-perception. On the other hand, form seems a necessary prerequisite for the computation and assignment of color across space, and there are several respects in which the color of an object can be influenced by its form. Evidently, color and form are mutually dependent. Electrophysiological studies have revealed neurons in the visual brain able to signal contours determined by pure chromatic contrast, the spatial tuning of which is similar to that of neurons carrying luminance information. It seems that, especially at an early stage, form is processed by several, independent systems that interact with each other, each one having different tuning characteristics in color space. At later processing stages, mechanisms able to combine information coming from different sources emerge. A clear interaction between color and form is manifested by the fact that color-form contingencies can be observed in various perceptual phenomena such as adaptation aftereffects and illusions. Such an interaction suggests a possible early binding between these two attributes, something that has been verified by both electrophysiological and fMRI studies. PMID:26578989
Evaluating Alignment of Shapes by Ensemble Visualization
Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.
2016-01-01
The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768
Integral modeling of human eyes: from anatomy to visual response
NASA Astrophysics Data System (ADS)
Navarro, Rafael
2006-02-01
Three basic stages towards global modeling of the eye are presented. In the first stage, an adequate choice of the basic geometrical model, a general ellipsoid in this case, permits fitting the typical "melon" shape of the cornea in a natural way with minimum complexity. In addition, it facilitates extraction of most of the optically relevant parameters, such as the position and orientation of the optical axis in 3D space, the paraxial and overall refractive power, the amount and axis of astigmatism, etc. In the second stage, this geometrical model, along with optical design and optimization tools, is applied to build customized optical models of individual eyes, able to reproduce the measured wave aberration with high fidelity. Finally, we put together a sequence of schematic but functionally realistic models of the different stages of image acquisition, coding, and analysis in the visual system, along with a probabilistic Bayesian maximum a posteriori identification approach. This permitted us to build a realistic simulation of all the essential processes involved in a visual acuity clinical exam. It is remarkable that at all three levels, the models have been able to predict the experimental data with high accuracy.
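As a rough illustration of the first stage, the sketch below fits a general quadric surface (of which the general ellipsoid is a special case) to scattered corneal-elevation points by linear least squares. The synthetic data and the fitting setup are assumptions made for the example, not the paper's actual procedure.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of a general quadric
    A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z = 1
    to scattered 3D points."""
    x, y, z = points.T
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(points)), rcond=None)
    return coeffs

# Synthetic "cornea-like" anterior cap of an ellipsoid with measurement noise.
rng = np.random.default_rng(0)
a, b, c = 7.8, 7.6, 6.5                      # semi-axes in mm (illustrative values)
theta = rng.uniform(0.0, 0.4 * np.pi, 500)   # polar angle: anterior cap only
phi = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.column_stack([a * np.sin(theta) * np.cos(phi),
                       b * np.sin(theta) * np.sin(phi),
                       c * np.cos(theta)])
pts += rng.normal(scale=0.01, size=pts.shape)

coeffs = fit_quadric(pts)
print("recovered 1/a^2, 1/b^2, 1/c^2:", coeffs[:3])
print("expected values             :", 1 / a**2, 1 / b**2, 1 / c**2)
```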
Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina
Venkataramani, Sowmya
2016-01-01
Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. SIGNIFICANCE STATEMENT A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. PMID:26985041
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
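The following sketch illustrates the core of such a metric for a single grayscale frame pair, using simplified stand-ins for the steps described above: an 8x8 block DCT, DC-normalized local contrast, division by a flat visual-threshold value, a crude contrast-masking step, and Minkowski pooling of the errors. The threshold, masking rule, and pooling exponent are placeholders for illustration, not the patented DVQ parameters.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_contrast(frame, block=8):
    """8x8 block DCT converted to local contrast (AC coefficients / block DC)."""
    h, w = (np.array(frame.shape) // block) * block
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3)                 # (by, bx, 8, 8)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
    dc = coeffs[..., :1, :1].clip(min=1e-6)
    contrast = coeffs / dc
    contrast[..., 0, 0] = 0.0                             # drop the DC term itself
    return contrast

def dvq_like_score(reference, test, threshold=0.02, beta=4.0):
    """Toy perceptual-error score: threshold-normalized contrast differences,
    lightly masked by reference activity, pooled with a Minkowski norm."""
    cr = block_dct_contrast(reference) / threshold
    ct = block_dct_contrast(test) / threshold
    err = ct - cr
    masked = err / (1.0 + np.abs(cr))                     # crude contrast masking
    return np.mean(np.abs(masked) ** beta) ** (1.0 / beta)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
proc = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0, 1)
print("perceptual error score:", dvq_like_score(ref, proc))
```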
System of error detection in the manufacture of garments using artificial vision
NASA Astrophysics Data System (ADS)
Moreno, J. J.; Aguila, A.; Partida, E.; Martinez, C. L.; Morales, O.; Tejeida, R.
2017-12-01
A computer vision system is implemented to detect errors in the cutting stage of the garment manufacturing process in the textile industry. It provides a solution for errors within the process that cannot easily be detected by an employee, and it significantly increases the speed of quality review. In the textile industry, as in many others, quality control of manufactured products is required, and over the years this has been carried out manually by means of visual inspection by employees. The objective of this project is therefore to design a quality control system using computer vision to identify errors in the cutting stage of the garment manufacturing process, increasing the productivity of textile processes by reducing costs.
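One minimal way such a cutting-stage check could work is sketched below, assuming OpenCV and a simple compare-against-reference approach: align a photographed cut piece with a reference template, difference the silhouettes, and flag large discrepancy regions. The file names, thresholds, and registration-free setup are assumptions for illustration, not the system described in the paper.

```python
import cv2
import numpy as np

def cut_defects(template_path, sample_path, min_area=200):
    """Compare a cut garment piece against its reference template and return
    bounding boxes of regions where the silhouettes disagree."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread(sample_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.resize(sample, (template.shape[1], template.shape[0]))

    # Binarize both images (assumes a high-contrast cutting-table background).
    _, t_bin = cv2.threshold(template, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, s_bin = cv2.threshold(sample, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Pixels where the cut piece deviates from the template silhouette.
    diff = cv2.absdiff(t_bin, s_bin)
    diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(diff, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    defects = cut_defects("template_piece.png", "cut_piece.png")
    print(f"{len(defects)} suspect region(s):", defects)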
ERIC Educational Resources Information Center
Borowsky, Ron; Besner, Derek
2006-01-01
D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…
Asymmetric Attention: Visualizing the Uncertain Threat
2010-03-01
memory. This is supportive of earlier research by Engle (2002) suggesting that executive attention and working memory capacity are... explored by Engle (2002). Engle's findings suggest that attention or the executive function and working memory actually entail the same mental process... recognition, and action. These skills orient and guide the Soldier in operational settings from the basic perceptual process at the attentiveness stage
Too little, too late: reduced visual span and speed characterize pure alexia.
Starrfelt, Randi; Habekost, Thomas; Leff, Alexander P
2009-12-01
Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that 1) pure alexia is caused by damage to a reading-specific brain region in the left fusiform gyrus or 2) pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise due to a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected.
The Timing of Visual Object Categorization
Mack, Michael L.; Palmeri, Thomas J.
2011-01-01
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
Rust, Nicole C.; DiCarlo, James J.
2012-01-01
While popular accounts suggest that neurons along the ventral visual processing stream become increasingly selective for particular objects, this appears at odds with the fact that inferior temporal cortical (IT) neurons are broadly tuned. To explore this apparent contradiction, we compared processing in two ventral stream stages (V4 and IT) in the rhesus macaque monkey. We confirmed that IT neurons are indeed more selective for conjunctions of visual features than V4 neurons, and that this increase in feature conjunction selectivity is accompanied by an increase in tolerance (“invariance”) to identity-preserving transformations (e.g. shifting, scaling) of those features. We report here that V4 and IT neurons are, on average, tightly matched in their tuning breadth for natural images (“sparseness”), and that the average V4 or IT neuron will produce a robust firing rate response (over 50% of its peak observed firing rate) to ~10% of all natural images. We also observed that sparseness was positively correlated with conjunction selectivity and negatively correlated with tolerance within both V4 and IT, consistent with selectivity-building and invariance-building computations that offset one another to produce sparseness. Our results imply that the conjunction-selectivity-building and invariance-building computations necessary to support object recognition are implemented in a balanced fashion to maintain sparseness at each stage of processing. PMID:22836252
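The tuning-breadth comparison above relies on a sparseness statistic; the sketch below shows how such a measure can be computed from trial-averaged responses, using the common Vinje-Gallant lifetime-sparseness formula together with the fraction of images driving a neuron above 50% of its peak rate. The simulated response distribution is a placeholder, not the recorded V4/IT data.

```python
import numpy as np

def lifetime_sparseness(rates):
    """Vinje & Gallant (2000) sparseness: 0 = flat tuning, 1 = responds to one image."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1.0 - (r.mean() ** 2) / np.mean(r ** 2)) / (1.0 - 1.0 / n)

def robust_response_fraction(rates, criterion=0.5):
    """Fraction of images evoking more than `criterion` of the peak firing rate."""
    r = np.asarray(rates, dtype=float)
    return np.mean(r > criterion * r.max())

# Toy neuron: trial-averaged firing rates to 300 natural images (skewed, broadly tuned).
rng = np.random.default_rng(2)
rates = rng.gamma(shape=1.2, scale=8.0, size=300)

print("sparseness:", round(lifetime_sparseness(rates), 3))
print("fraction of images > 50% of peak:", round(robust_response_fraction(rates), 3))
```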
Xie, Yuanjun; Feng, Zhengquan; Xu, Yuanyuan; Bian, Chen; Li, Min
2016-10-28
A putative functional role for alpha oscillations in working memory remains controversial. However, recent evidence suggests that such oscillations may reflect distinct phases of working memory processing. The present study investigated alpha band (8-13 Hz) activity during the maintenance stage of working memory using a modified Sternberg working memory task. Our results reveal that alpha power was concentrated primarily in the occipital cortex and was decreased during the early stage of maintenance (0-600 ms), and subsequently increased during the later stage of maintenance (1000-1600 ms). We suggest that reduced alpha power may be involved in focused attention during working memory maintenance, whereas increased alpha power may reflect suppression of visual stimuli to facilitate internal processing related to the task. This interpretation is generally consistent with recent reports suggesting that variations in alpha power are associated with the representation and processing of information in discrete time intervals during working memory maintenance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A real time ECG signal processing application for arrhythmia detection on portable devices
NASA Astrophysics Data System (ADS)
Georganis, A.; Doulgeraki, N.; Asvestas, P.
2017-11-01
Arrhythmia describes disorders of the normal heart rate which, depending on the case, can even be fatal for a patient with a severe history of heart disease. The purpose of this work is to develop an application for heart signal visualization, processing, and analysis on Android portable devices, e.g., mobile phones, tablets, etc. The application retrieves the signal initially from a file; at a later stage this signal is processed and analysed within the device so that it can be classified according to the features of the arrhythmia. The processing and analysis stage includes different algorithms, among them the moving average and the Pan-Tompkins algorithm, as well as wavelets, in order to extract features and characteristics. At the final stage, testing is performed by running the application on real-time records, using the TCP network protocol for communication between the mobile device and a simulated signal source. Classification of the processed ECG beats is performed by neural networks.
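The Pan-Tompkins stage mentioned above can be sketched as follows for an offline ECG trace, assuming SciPy and the standard bandpass-derivative-squaring-integration chain with a simple peak picker. The filter band, window length, and detection threshold are illustrative assumptions rather than the application's tuned parameters, and the fully adaptive thresholding rules of the original algorithm are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pan_tompkins_qrs(ecg, fs):
    """Classic Pan-Tompkins preprocessing chain followed by a simple peak picker.
    Returns sample indices of detected QRS complexes."""
    # 1) Bandpass 5-15 Hz to emphasize the QRS complex.
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiate to highlight steep slopes, then square.
    squared = np.square(np.diff(filtered))
    # 3) Moving-window integration (~150 ms window).
    win = max(1, int(0.150 * fs))
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 4) Fixed threshold plus refractory period instead of the full adaptive rules.
    peaks, _ = find_peaks(integrated,
                          height=0.35 * integrated.max(),
                          distance=int(0.25 * fs))
    return peaks

# Synthetic demo: 10 s of noisy "ECG" with one sharp beat per second.
fs = 250
t = np.arange(0, 10, 1.0 / fs)
ecg = 0.05 * np.random.randn(t.size)
ecg[(t % 1.0) < 1.0 / fs] += 1.0

beats = pan_tompkins_qrs(ecg, fs)
print("detected beats:", len(beats), "-> heart rate ~", 6 * len(beats), "bpm")
```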
Hillyard, S A; Vogel, E K; Luck, S J
1998-01-01
Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention. Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs). PMID:9770220
Walter, Sabrina; Keitel, Christian; Müller, Matthias M
2016-01-01
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" condition only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
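Frequency-tagged designs like this one are typically analysed by reading out the spectral amplitude at each stimulus flicker frequency. A minimal sketch of that readout is given below, with simulated single-channel EEG standing in for the recorded data and arbitrary example frequencies rather than the study's actual LED rates.

```python
import numpy as np

def ssvep_amplitudes(eeg, fs, freqs):
    """Amplitude spectrum of one EEG epoch, sampled at the tagged frequencies."""
    n = eeg.size
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) * 2.0 / n
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}

# Simulated 4 s epoch: two flicker-driven components plus background noise.
fs, dur = 500, 4.0
t = np.arange(0, dur, 1.0 / fs)
tags = [8.57, 12.0]                       # example tagging frequencies (Hz)
eeg = (0.8 * np.sin(2 * np.pi * tags[0] * t)
       + 0.5 * np.sin(2 * np.pi * tags[1] * t)
       + np.random.randn(t.size))

print(ssvep_amplitudes(eeg, fs, tags))
```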
Design of penicillin fermentation process simulation system
NASA Astrophysics Data System (ADS)
Qi, Xiaoyu; Yuan, Zhonghu; Qi, Xiaoxuan; Zhang, Wenqi
2011-10-01
Real-time monitoring of batch processes attracts increasing attention. It can ensure safety and provide products with consistent quality. The design of a simulation system for batch process fault diagnosis is therefore of great significance. In this paper, penicillin fermentation, a typical non-linear, dynamic, multi-stage batch production process, is taken as the research object. A visual, human-machine interactive simulation software system based on the Windows operating system is developed. The simulation system provides an effective platform for research on batch process fault diagnosis.
Vistoli, Damien; Achim, Amélie M; Lavoie, Marie-Audrey; Jackson, Philip L
2016-05-01
Empathy refers to our capacity to share and understand the emotional states of others. It relies on two main processes according to existing models: an effortless affective sharing process based on neural resonance and a more effortful cognitive perspective-taking process enabling the ability to imagine and understand how others feel in specific situations. Until now, studies have focused on factors influencing the affective sharing process but little is known about those influencing the cognitive perspective-taking process and the related brain activations during vicarious pain. In the present fMRI study, we used the well-known physical pain observation task to examine whether the visual perspective can influence, in a bottom-up way, the brain regions involved in taking others' cognitive perspective to attribute their level of pain. We used a pseudo-dynamic version of this classic task which features hands in painful or neutral daily life situations while orthogonally manipulating: (1) the visual perspective with which hands were presented (first-person versus third-person conditions) and (2) the explicit instructions to imagine oneself or an unknown person in those situations (Self versus Other conditions). The cognitive perspective-taking process was investigated by comparing Other and Self conditions. When examined across both visual perspectives, this comparison showed no supra-threshold activation. Instead, the Other versus Self comparison led to a specific recruitment of the bilateral temporo-parietal junction when hands were presented according to a first-person (but not third-person) visual perspective. The present findings identify the visual perspective as a factor that modulates the neural activations related to cognitive perspective-taking during vicarious pain and show that this complex cognitive process can be influenced by perceptual stages of information processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
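One way to make the 'growth-cone' idea concrete is to let the selection front spread over an object mask at a speed set by the local scale of the region, so that wide parts are traversed quickly and narrow parts slowly. The sketch below implements this as a shortest-arrival-time propagation over a binary image; the toy shape and the use of a distance transform as the scale estimate are illustrative assumptions, not the authors' model.

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def growth_cone_arrival(mask, seed):
    """Arrival time of an attentional front spreading from `seed` over `mask`,
    moving faster where the object is locally wide (large distance transform)."""
    speed = distance_transform_edt(mask)            # local scale as spread speed
    times = np.full(mask.shape, np.inf)
    times[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > times[y, x]:
            continue
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                nt = t + 1.0 / max(speed[ny, nx], 1e-6)   # cost = 1 / local scale
                if nt < times[ny, nx]:
                    times[ny, nx] = nt
                    heapq.heappush(heap, (nt, (ny, nx)))
    return times

# Toy object: two wide regions connected by a narrow bridge.
mask = np.zeros((40, 80), dtype=bool)
mask[5:35, 5:30] = True                  # left blob
mask[18:22, 30:50] = True                # narrow connecting strip
mask[5:35, 50:75] = True                 # right blob
arrival = growth_cone_arrival(mask, (20, 10))

print("arrival within left blob :", round(arrival[20, 25], 2))
print("arrival across the bridge:", round(arrival[20, 60], 2))
```

The arrival times behave like the reaction-time pattern described above: the front covers the large homogeneous blob quickly and is slowed where small-scale processing (the narrow bridge) is required.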
Visual Outcomes of Macular Hole Surgery.
Khaqan, Hussain Ahmad; Lubna; Jameel, Farrukh; Muhammad
2016-10-01
To determine the mean visual improvement after internal limiting membrane (ILM) peeling assisted with brilliant blue staining of ILM in macular hole, and stratify the mean visual improvement in different stages of macular hole. Quasi-experimental study. Eye outpatient department (OPD), Lahore General Hospital, Lahore from October 2013 to December 2014. Patients with macular hole underwent measurement of best corrected visual acuity (BCVA) and fundus examination with indirect slit lamp biomicroscopy before surgery. The diagnosis of all patients was confirmed on optical coherence tomography. All patients had 23G trans-conjunctival three ports pars plana vitrectomy, ILM peeling, and endotamponade of SF6. The mean visual improvement of different stages of macular hole was noted. Paired t-test was applied. There were 30 patients, 15 males and 15 females (50%). The mean age was 62 ±10.95 years. They presented with low mean preoperative visual acuity (VA) of 0.96 ±0.11 logMar. The mean postoperative VA was 0.63 ±0.24 logMar. The mean visual increase was 0.33 ±0.22 logMar (p < 0.001). In patients with stage 2 macular hole, mean visual increase was 0.35 ±0.20 logMar (p < 0.001). In patients with stage 3 macular hole, mean visual increase was 0.44 ±0.21 logMar (p < 0.001), and in patients with stage 4 macular hole it was 0.13 ±0.10 logMar (p = 0.004). ILM peeling assisted with brilliant blue is a promising surgery for patients who have decreased vision due to stage 2 - 4 macular holes.
Vindrola-Padros, Cecilia; Martins, Ana; Coyne, Imelda; Bryan, Gemma; Gibson, Faith
2016-01-01
Research with young people suffering from a long-term illness has more recently incorporated the use of visual methods to foster engagement of research participants from a wide age range, capture the longitudinal and complex factors involved in young people's experiences of care, and allow young people to express their views in multiple ways. Despite its contributions, these methods are not always easy to implement and there is a possibility that they might not generate the results or engagement initially anticipated by researchers. We hope to expand on the emerging discussion on the use of participatory visual methods by presenting the practical issues we have faced while using this methodology during different stages of research: informed assent/consent, data collection, and the dissemination of findings. We propose a combination of techniques to make sure that the research design is flexible enough to allow research participants to shape the research process according to their needs and interests.
Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).
Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen
2018-06-06
Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
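A linear read-out of the kind used above can be sketched as follows, assuming scikit-learn, a pseudo-population response matrix (trials x neurons), and binary animate/inanimate labels. The simulated data and classifier settings are placeholders, not the recorded MVL or entopallium activity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_neurons = 200, 60

# Simulated pseudo-population: animate trials get a small response shift on a
# random subset of neurons, mimicking category information in the population.
labels = rng.integers(0, 2, n_trials)                  # 1 = animate, 0 = inanimate
responses = rng.normal(size=(n_trials, n_neurons))
tuned = rng.choice(n_neurons, size=15, replace=False)
responses[np.ix_(labels == 1, tuned)] += 0.6

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, responses, labels, cv=5)
print(f"animate vs inanimate decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

Running the same decoder on a population with no injected category signal (drop the shift) gives chance-level accuracy, which is the comparison that distinguishes an area like MVL from one like the entopallium in the analysis described above.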
NASA Astrophysics Data System (ADS)
Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai
2018-05-01
The article is devoted to visualization of the morphing of spatial objects described by a set of unordered reference points. A two-stage model construction is proposed to change the object's form in real time. The first (preliminary) stage is interpolation of the object's surface by radial basis functions. The initial reference points are replaced by new, spatially ordered ones, and patterns for changing the reference points' coordinates during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the computational performance.
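The preliminary stage, interpolating a surface through unordered reference points and resampling it on an ordered grid, can be sketched with SciPy's radial-basis interpolator as below. The thin-plate-spline kernel, the height-field form of the surface, and the toy data are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)

# Unordered reference points: scattered (x, y) locations with surface heights z.
xy = rng.uniform(-1.0, 1.0, size=(80, 2))
z = np.exp(-2.0 * (xy ** 2).sum(axis=1))          # toy bump-shaped surface

# Stage 1: radial-basis interpolation of the surface through the scattered points.
surface = RBFInterpolator(xy, z, kernel="thin_plate_spline")

# Replace the unordered points by new, spatially ordered ones on a regular grid;
# these ordered samples are what the real-time blending stage would consume.
gx, gy = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
ordered_z = surface(grid)

print("ordered reference points:", grid.shape[0],
      "| max height on grid:", round(ordered_z.max(), 3))
```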
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
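The accumulation-to-threshold account referred to above can be illustrated with a minimal simulation in which a noisy accumulator driven by the post-discrimination visual response rises to a fixed bound; a weaker drive (as with a dim stimulus) lengthens the post-discrimination interval even when overall reaction times are matched by an easier discrimination. The drift values, noise level, and threshold below are arbitrary placeholders, not fitted parameters from the study.

```python
import numpy as np

def postdiscrimination_interval(drive, threshold=1.0, noise=0.05, dt=0.001,
                                n_trials=2000, seed=0):
    """Mean time (ms) for a noisy accumulator with drift `drive` to reach threshold."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drive * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return 1000.0 * np.mean(times)

# Stronger visual drive (bright stimulus) vs weaker drive (dim stimulus).
print("strong drive (bright):", round(postdiscrimination_interval(8.0), 1), "ms")
print("weak drive (dim)     :", round(postdiscrimination_interval(5.0), 1), "ms")
```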
The ventral visual pathway: an expanded neural framework for the processing of object quality.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer
2013-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.
Influence of early attentional modulation on working memory
Gazzaley, Adam
2011-01-01
It is now established that attention influences working memory (WM) at multiple processing stages. This liaison between attention and WM poses several interesting empirical questions. Notably, does attention impact WM via its influences on early perceptual processing? If so, what are the critical factors at play in this attention-perception-WM interaction? I review recent data from our laboratory utilizing a variety of techniques (electroencephalography (EEG), functional MRI (fMRI) and transcranial magnetic stimulation (TMS)), stimuli (features and complex objects), novel experimental paradigms, and research populations (younger and older adults), which converge to support the conclusion that top-down modulation of visual cortical activity at early perceptual processing stages (100–200 ms after stimulus onset) impacts subsequent WM performance. Factors that affect attentional control at this stage include cognitive load, task practice, perceptual training, and aging. These developments highlight the complex and dynamic relationships among perception, attention, and memory. PMID:21184764
Anatomy and physiology of the afferent visual system.
Prasad, Sashank; Galetta, Steven L
2011-01-01
The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
Visual Modelling of Data Warehousing Flows with UML Profiles
NASA Astrophysics Data System (ADS)
Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan
Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.
Flow visualization for investigating stator losses in a multistage axial compressor
NASA Astrophysics Data System (ADS)
Smith, Natalie R.; Key, Nicole L.
2015-05-01
The methodology and implementation of a powder-paint-based flow visualization technique, along with the illuminated flow physics, are presented in detail for application in a three-stage axial compressor. While flow visualization often accompanies detailed studies, the turbomachinery literature lacks a comprehensive study which both utilizes flow visualization to interpret the flow field and explains the intricacies of execution. Lessons learned for obtaining high-quality images of surface flow patterns are discussed in this study. Fluorescent paint is used to provide clear, high-contrast pictures of the recirculation regions on shrouded vane rows. An edge-finding image processing procedure is implemented to provide a quantitative measure of vane-to-vane variability in flow separation, which is approximately 7% of the suction surface length for Stator 1. Results include images of vane suction side corner separations from all three stages at three loading conditions. Additionally, streakline patterns obtained experimentally are compared with those calculated from computational models. Flow physics associated with vane clocking and increased rotor tip clearance, and their implications for stator loss, are also investigated with this flow visualization technique. With increased rotor tip clearance, the vane surface flow patterns show a shift to larger separations and more radial flow at the tip. Finally, the effects of instrumentation on the flow field are highlighted.
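The edge-finding step used to quantify vane-to-vane variability can be sketched as follows, assuming OpenCV and a fluorescent-paint image in which the recirculation region appears as a bright patch on the vane suction surface. The thresholds, file name, and the way extent is expressed as a fraction of suction surface length are illustrative assumptions, not the authors' exact procedure.

```python
import cv2
import numpy as np

def separation_extent(image_path, suction_surface_len_px):
    """Return the chordwise extent of the painted separation region as a
    fraction of the suction surface length, from a single vane photograph."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)

    # Fluorescent paint accumulation shows up as the brightest region.
    _, bright = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(bright, 50, 150)

    cols = np.where(edges.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    extent_px = cols.max() - cols.min()            # horizontal span of the region
    return extent_px / float(suction_surface_len_px)

if __name__ == "__main__":
    frac = separation_extent("stator1_vane07.png", suction_surface_len_px=1200)
    print(f"separation extent: {100 * frac:.1f}% of suction surface length")
```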
A tone mapping operator based on neural and psychophysical models of visual perception
NASA Astrophysics Data System (ADS)
Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier
2015-03-01
High dynamic range imaging techniques involve capturing and storing real-world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges only up to two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, and this process is called tone mapping. A good tone mapping operator must be able to produce a low dynamic range image that matches the perception of the real-world scene as closely as possible. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
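The first, global stage can be sketched as a search for the gamma exponent whose output lightness histogram is closest to uniform, i.e. best equalized. The snippet below implements that idea with a maximum-entropy selection over a candidate range, using synthetic HDR luminances; the candidate range, bin count, and entropy criterion are assumptions for illustration, not the operator's actual parameters.

```python
import numpy as np

def best_equalizing_gamma(luminance, gammas=np.linspace(0.05, 1.0, 96), bins=64):
    """Pick the gamma whose tone-mapped lightness histogram has maximum entropy,
    i.e. is closest to a flat (equalized) histogram."""
    lum = luminance / luminance.max()                  # normalize to [0, 1]
    best_gamma, best_entropy = None, -np.inf
    for g in gammas:
        mapped = lum ** g
        hist, _ = np.histogram(mapped, bins=bins, range=(0.0, 1.0), density=True)
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        if entropy > best_entropy:
            best_gamma, best_entropy = g, entropy
    return best_gamma

# Synthetic HDR luminances spanning several orders of magnitude (log-normal).
rng = np.random.default_rng(5)
hdr = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)

gamma = best_equalizing_gamma(hdr)
ldr = (hdr / hdr.max()) ** gamma                       # first-stage global mapping
print("selected gamma:", round(gamma, 3))
```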
Interactive Medical Volume Visualization for Surgical Operations
2001-10-25
the preprocessing and processing stages, related medical brain tissues, which are skull, white matter, gray matter and pathology (tumor), are segmented... from 12 or 16 bit data depths. NMR segmentation plays an important role in our work, because classifying brain tissues from NMR slices requires an... performing segmentation of brain structures. Our segmentation process uses Self Organizing Feature Maps (SOFM) [12]. In SOM, on the contrary to Feedback
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.
2009-01-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…
Metabolic alterations in patients with Parkinson disease and visual hallucinations.
Boecker, Henning; Ceballos-Baumann, Andres O; Volk, Dominik; Conrad, Bastian; Forstl, Hans; Haussermann, Peter
2007-07-01
Visual hallucinations (VHs) occur frequently in advanced stages of Parkinson disease (PD). Which brain regions are affected in PD with VH is not well understood. To characterize the pattern of affected brain regions in PD with VH and to determine whether functional changes in PD with VH occur preferentially in visual association areas, as is suggested by the complex clinical symptomatology. Positron emission tomography measurements using fluorodeoxyglucose F 18. Between-group statistical analysis, accounting for the variance related to disease stage. University hospital. Patients: Eight patients with PD and VH and 11 patients with PD without VH were analyzed. The presence of VH during the month before positron emission tomography was rated using the Neuropsychiatric Inventory subscale for VH (PD and VH, 4.63; PD without VH, 0.00; P < .002). Parkinson disease with VH, compared with PD without VH, was characterized by reduction in the regional cerebral metabolic rate for glucose consumption (P < .05, corrected for false discovery rate) in occipitotemporoparietal regions, sparing the occipital pole. No significant increase in regional glucose metabolism was detected in patients with PD and VH. The pattern of resting-state metabolic changes in regions of the dorsal and ventral visual streams, but not in primary visual cortex, in patients with PD and VH, is compatible with the functional roles of visual association areas in higher-order visual processing. These findings may help to further elucidate the functional mechanisms underlying VH in PD.
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.
Visual Processing in Rapid-Chase Systems: Image Processing, Attention, and Awareness
Schmidt, Thomas; Haberkamp, Anke; Veltkamp, G. Marina; Weber, Andreas; Seydell-Greenwald, Anna; Schmidt, Filipp
2011-01-01
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness. PMID:21811484
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has been often used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: An attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried
2014-02-01
In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100 ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA participants. Copyright © 2013 Society for Psychophysiological Research.
Object-based spatial attention when objects have sufficient depth cues.
Takeya, Ryuji; Kasai, Tetsuko
2015-01-01
Attention directed to a part of an object tends to obligatorily spread over all of the spatial regions that belong to the object, which may be critical for rapid object-recognition in cluttered visual scenes. Previous studies have generally used simple rectangles as objects and have shown that attention spreading is reflected by amplitude modulation in the posterior N1 component (150-200 ms poststimulus) of event-related potentials, while other interpretations (i.e., rectangular holes) may arise implicitly in early visual processing stages. By using modified Kanizsa-type stimuli that provided less ambiguity of depth ordering, the present study examined early event-related potential spatial-attention effects for connected and separated objects, both of which were perceived in front of (Experiment 1) and in back of (Experiment 2) the surroundings. Typical P1 (100-140 ms) and N1 (150-220 ms) attention effects of ERP in response to unilateral probes were observed in both experiments. Importantly, the P1 attention effect was decreased for connected objects compared to separated objects only in Experiment 1, and the typical object-based modulations of N1 were not observed in either experiment. These results suggest that spatial attention spreads over a figural object at earlier stages of processing than previously indicated, in three-dimensional visual scenes with multiple depth cues.
The locus of impairment in English developmental letter position dyslexia
Kezilas, Yvette; Kohnen, Saskia; McKague, Meredith; Castles, Anne
2014-01-01
Many children with reading difficulties display phonological deficits and struggle to acquire non-lexical reading skills. However, not all children with reading difficulties have these problems, such as children with selective letter position dyslexia (LPD), who make excessive migration errors (such as reading slime as “smile”). Previous research has explored three possible loci for the deficit – the phonological output buffer, the orthographic input lexicon, and the orthographic-visual analysis stage of reading. While there is compelling evidence against a phonological output buffer and orthographic input lexicon deficit account of English LPD, the evidence in support of an orthographic-visual analysis deficit is currently limited. In this multiple single-case study with three English-speaking children with developmental LPD, we aimed to both replicate and extend previous findings regarding the locus of impairment in English LPD. First, we ruled out a phonological output buffer and an orthographic input lexicon deficit by administering tasks that directly assess phonological processing and lexical guessing. We then went on to directly assess whether or not children with LPD have an orthographic-visual analysis deficit by modifying two tasks that have previously been used to localize processing at this level: a same-different decision task and a non-word reading task. The results from these tasks indicate that LPD is most likely caused by a deficit specific to the coding of letter positions at the orthographic-visual analysis stage of reading. These findings provide further evidence for the heterogeneity of dyslexia and its underlying causes. PMID:24917802
Early access to abstract representations in developing readers: Evidence from masked priming
Perea, Manuel; Abu Mallouh, Reem; Carreiras, Manuel
2013-01-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing –as measured by masked priming– in young children (3rd and 6th graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early moments of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word’s letters) as the target word (e.g., [ktzb-ktAb]; note that the three initial letters are connected in prime and target) than for those that do not ([ktxb-ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of primes. Thus, despite the visual complexity of Arabic orthography, there is fast access to the abstract letter representations not only in adult readers but also in developing readers. PMID:23786474
A Phenomenological Investigation of Master's-Level Counselor Research Identity Development Stages
ERIC Educational Resources Information Center
Jorgensen, Maribeth F.; Duncan, Kelly
2015-01-01
This study explored counselor research identity, an aspect of professional identity, in master's-level counseling students. Twelve students participated in individual interviews; six of the participants were involved in a focus group interview and visual representation process. The three data sources supported the emergence of five themes. The…
Prosodic Encoding in Silent Reading.
ERIC Educational Resources Information Center
Wilkenfeld, Deborah
In silent reading, short-term memory tasks, such as semantic and syntactic processing, require a stage of phonetic encoding between visual representation and the actual extraction of meaning, and this encoding includes prosodic as well as segmental features. To test for this suprasegmental coding, an experiment was conducted in which subjects were…
Early, Equivalent ERP Masked Priming Effects for Regular and Irregular Morphology
ERIC Educational Resources Information Center
Morris, Joanna; Stockall, Linnaea
2012-01-01
Converging evidence from behavioral masked priming (Rastle & Davis, 2008), EEG masked priming (Morris, Frank, Grainger, & Holcomb, 2007) and single word MEG (Zweig & Pylkkanen, 2008) experiments has provided robust support for a model of lexical processing which includes an early, automatic, visual word form based stage of morphological parsing…
Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data
2015-04-01
working with Intermountain Healthcare on a new rich dataset extracted directly from medical notes using natural language processing (NLP) algorithms ... probabilities based on state-of-the-art NLP classifiers. At that stage the data did not include geographic information or temporal information but we
Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu
2013-09-01
Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.
Pitting temporal against spatial integration in schizophrenic patients.
Herzog, Michael H; Brand, Andreas
2009-06-30
Schizophrenic patients show strong impairments in visual backward masking possibly caused by deficits on the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we show further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.
Mountain building processes in the Central Andes
NASA Technical Reports Server (NTRS)
Bloom, A. L.; Isacks, B. L.
1986-01-01
False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data was organized into easily accessible files and a visual cataloging of the quads (quarter TM scenes) with preliminary registration with the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magmatism in relation to the Nazca-South American plate interaction is at an initial stage.
A complex noise reduction method for improving visualization of SD-OCT skin biomedical images
NASA Astrophysics Data System (ADS)
Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Khramov, Alexander G.
2014-05-01
In this paper, we present an original method for noise reduction to improve the visualization quality of SD-OCT images of skin and tumors. The principal advantages of OCT are high resolution and the possibility of in vivo analysis. We propose a two-stage algorithm: 1) processing of the raw one-dimensional A-scans of the SD-OCT and 2) removal of noise from the resulting B(C)-scans. The general mathematical methods of SD-OCT are unstable: if the noise of the CCD is 1.6% of the dynamic range, the resulting distortions already reach 25-40% of the dynamic range. At the first stage we use a resampling of the A-scans and simple linear filters to reduce the amount of data and remove the noise of the CCD camera. The efficiency, the improvement in productivity, and the preservation of axial resolution with this approach are shown. At the second stage we use an effective algorithm based on the Hilbert-Huang Transform to remove noise peaks more accurately. The effectiveness of the proposed approach for the visualization of malignant and benign skin tumors (melanoma, BCC, etc.) and a significant improvement of the SNR level for different methods of noise reduction are shown. We also consider a modification of this method depending on the specific hardware and software features of the OCT setup used. The basic version does not require any hardware modification of existing equipment. The effectiveness of the proposed method for 3D visualization of tissues can simplify medical diagnosis in oncology.
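A minimal sketch of the first stage (A-scan resampling plus a simple linear filter) is given below in Python with SciPy. The downsampling factor and window size are illustrative rather than the authors' settings, and the second, Hilbert-Huang-based denoising stage is not reproduced here.

import numpy as np
from scipy.signal import decimate

def preprocess_a_scan(a_scan, downsample=2, kernel=5):
    """Stage-1 sketch: resample a raw A-scan and smooth it with a simple
    linear (moving-average) filter to suppress CCD noise before B-scan
    formation. Parameter values are illustrative only."""
    resampled = decimate(a_scan, downsample, zero_phase=True)  # anti-aliased resampling
    window = np.ones(kernel) / kernel
    return np.convolve(resampled, window, mode="same")

# Hypothetical raw A-scan: signal plus ~1.6% full-scale CCD noise
raw = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.016 * np.random.randn(4096)
clean = preprocess_a_scan(raw)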
Cumulative latency advance underlies fast visual processing in desynchronized brain state
Wang, Xu-dong; Chen, Cheng; Zhang, Dinghong; Yao, Haishan
2014-01-01
Fast sensory processing is vital for the animal to efficiently respond to the changing environment. This is usually achieved when the animal is vigilant, as reflected by cortical desynchronization. However, the neural substrate for such fast processing remains unclear. Here, we report that neurons in rat primary visual cortex (V1) exhibited shorter response latency in the desynchronized state than in the synchronized state. In vivo whole-cell recording from the same V1 neurons undergoing the two states showed that both the resting and visually evoked conductances were higher in the desynchronized state. Such conductance increases of single V1 neurons shorten the response latency by elevating the membrane potential closer to the firing threshold and reducing the membrane time constant, but the effects only account for a small fraction of the observed latency advance. Simultaneous recordings in lateral geniculate nucleus (LGN) and V1 revealed that LGN neurons also exhibited latency advance, with a degree smaller than that of V1 neurons. Furthermore, latency advance in V1 increased across successive cortical layers. Thus, latency advance accumulates along various stages of the visual pathway, likely due to a global increase of membrane conductance in the desynchronized state. This cumulative effect may lead to a dramatic shortening of response latency for neurons in higher visual cortex and play a critical role in fast processing for vigilant animals. PMID:24347634
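The argument above, that extra background conductance both depolarizes the cell and shortens the membrane time constant (tau = C/g), can be made concrete with a toy single-compartment calculation. All parameter values in the Python sketch below are invented for illustration and are not measurements from the study; the point is only that the higher-conductance case reaches threshold sooner, and only modestly so.

import numpy as np

def latency_to_threshold(g_syn, g_leak=0.01, e_leak=-70.0, e_syn=-55.0,
                         c_m=0.2, i_stim=0.45, v_th=-50.0):
    """Analytic time for a leaky integrator to reach spike threshold after a
    step input. Extra conductance g_syn with a reversal above rest (as in the
    desynchronized state) depolarizes the cell and lowers tau = C / g_total.
    Toy units: uS, nF, mV, nA."""
    g_tot = g_leak + g_syn
    tau = c_m / g_tot                                     # membrane time constant (ms)
    v_rest = (g_leak * e_leak + g_syn * e_syn) / g_tot    # depolarized resting potential
    v_inf = v_rest + i_stim / g_tot                       # steady state during the stimulus
    if v_inf <= v_th:
        return np.inf                                     # input too weak to reach threshold
    return tau * np.log((v_inf - v_rest) / (v_inf - v_th))

# Synchronized-like (low background conductance) vs desynchronized-like (high)
print(latency_to_threshold(g_syn=0.005), latency_to_threshold(g_syn=0.02))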
Bolduc-Teasdale, Julie; Jolicoeur, Pierre; McKerral, Michelle
2012-01-01
Individuals who have sustained a mild brain injury (e.g., mild traumatic brain injury or mild cerebrovascular stroke) are at risk to show persistent cognitive symptoms (attention and memory) after the acute postinjury phase. Although studies have shown that those patients perform normally on neuropsychological tests, cognitive symptoms remain present, and there is a need for more precise diagnostic tools. The aim of this study was to develop precise and sensitive markers for the diagnosis of post brain injury deficits in visual and attentional functions which could be easily translated in a clinical setting. Using electrophysiology, we have developed a task that allows the tracking of the processes involved in the deployment of visual spatial attention from early stages of visual treatment (N1, P1, N2, and P2) to higher levels of cognitive processing (no-go N2, P3a, P3b, N2pc, SPCN). This study presents a description of this protocol and its validation in 19 normal participants. Results indicated the statistically significant presence of all ERPs aimed to be elicited by this novel task. This task could allow clinicians to track the recovery of the mechanisms involved in the deployment of visual-attentional processing, contributing to better diagnosis and treatment management for persons who suffer a brain injury. PMID:23227309
Cognitive and artificial representations in handwriting recognition
NASA Astrophysics Data System (ADS)
Lenaghan, Andrew P.; Malyan, Ron
1996-03-01
Both cognitive processes and artificial recognition systems may be characterized by the forms of representation they build and manipulate. This paper looks at how handwriting is represented in current recognition systems and the psychological evidence for its representation in the cognitive processes responsible for reading. Empirical psychological work on feature extraction in early visual processing is surveyed to show that a sound psychological basis for feature extraction exists and to describe the features this approach leads to. The first stage of the development of an architecture for a handwriting recognition system which has been strongly influenced by the psychological evidence for the cognitive processes and representations used in early visual processing, is reported. This architecture builds a number of parallel low level feature maps from raw data. These feature maps are thresholded and a region labeling algorithm is used to generate sets of features. Fuzzy logic is used to quantify the uncertainty in the presence of individual features.
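A schematic version of the pipeline described above (parallel low-level feature maps, thresholding, region labeling) might look as follows in Python with SciPy. The two feature maps used here are generic stand-ins for the psychologically motivated features of the paper, and the fuzzy-logic uncertainty stage is omitted.

import numpy as np
from scipy import ndimage

def extract_features(image, threshold=0.5):
    """Build parallel low-level feature maps, threshold them, and label
    connected regions to obtain one feature set per map."""
    ink = ndimage.gaussian_filter(image, sigma=1.0)      # blurred ink-density map
    edges = np.abs(ndimage.sobel(image, axis=1))         # horizontal edge-energy map
    features = []
    for fmap in (ink, edges):
        fmap = fmap / (fmap.max() + 1e-9)
        labeled, n = ndimage.label(fmap > threshold)     # region labeling
        centroids = ndimage.center_of_mass(fmap, labeled, range(1, n + 1))
        features.append(centroids)                       # one feature set per map
    return features

# Hypothetical binarized handwriting patch
patch = (np.random.rand(64, 64) > 0.9).astype(float)
feature_sets = extract_features(patch)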
Automated Extraction of Flow Features
NASA Technical Reports Server (NTRS)
Dorney, Suzanne (Technical Monitor); Haimes, Robert
2005-01-01
Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest are detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surface, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
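As one concrete example of the kind of automated feature-extraction rule called for above, the sketch below flags candidate vortex regions in a 2-D velocity field with the Q-criterion. The abstract does not name this particular criterion, so it is offered purely as an illustration under that assumption.

import numpy as np

def q_criterion(u, v, dx=1.0, dy=1.0):
    """Flag candidate vortex regions where rotation dominates strain.
    u, v are 2-D velocity components on a uniform grid indexed [y, x]."""
    dudy, dudx = np.gradient(u, dy, dx, edge_order=2)
    dvdy, dvdx = np.gradient(v, dy, dx, edge_order=2)
    # squared norms of the symmetric (strain) and antisymmetric (rotation) parts
    s_norm2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2
    w_norm2 = 0.5 * (dudy - dvdx) ** 2
    q = 0.5 * (w_norm2 - s_norm2)
    return q > 0.0                      # True where the flow is vortex-like

# Hypothetical solid-body-rotation patch: the whole rotating region is flagged
y, x = np.mgrid[-1:1:64j, -1:1:64j]
mask = q_criterion(-y, x)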
First-Pass Processing of Value Cues in the Ventral Visual Pathway.
Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E
2018-02-19
Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in monkey ventral visual pathway) beginning 81 ms after stimulus onset-essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.
Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael
2012-07-01
Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.
Applying Strategic Visualization(Registered Trademark) to Lunar and Planetary Mission Design
NASA Technical Reports Server (NTRS)
Frassanito, John R.; Cooke, D. R.
2002-01-01
NASA teams, such as the NASA Exploration Team (NEXT), utilize advanced computational visualization processes to develop mission designs and architectures for lunar and planetary missions. One such process, Strategic Visualization (trademark), is a tool used extensively to help mission designers visualize various design alternatives and present them to other participants of their team. The participants, which may include NASA, industry, and the academic community, are distributed within a virtual network. Consequently, computer animation and other digital techniques provide an efficient means to communicate top-level technical information among team members. Today, Strategic Visualization (trademark) is used extensively both in the mission design process within the technical community, and to communicate the value of space exploration to the general public. Movies and digital images have been generated and shown on nationally broadcast television and the Internet, as well as in magazines and digital media. In our presentation we will show excerpts of a computer-generated animation depicting the reference Earth/Moon L1 Libration Point Gateway architecture. The Gateway serves as a staging corridor for human expeditions to the lunar poles and other surface locations. Also shown are crew transfer systems and current reference lunar excursion vehicles as well as the human and robotic construction of an inflatable telescope array for deployment to the Sun/Earth Libration Point.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2011-01-01
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (speaking face)--served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal “what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Inefficient conjunction search made efficient by concurrent spoken delivery of target identity.
Reali, Florencia; Spivey, Michael J; Tyler, Melinda J; Terranova, Joseph
2006-08-01
Visual search based on a conjunction of two features typically elicits reaction times that increase linearly as a function of the number of distractors, whereas search based on a single feature is essentially unaffected by set size. These and related findings have often been interpreted as evidence of a serial search stage that follows a parallel search stage. However, a wide range of studies has been showing a form of blending of these two processes. For example, when a spoken instruction identifies the conjunction target concurrently with the visual display, the effect of set size is significantly reduced, suggesting that incremental linguistic processing of the first feature adjective and then the second feature adjective may facilitate something approximating a parallel extraction of objects during search for the target. Here, we extend these results to a variety of experimental designs. First, we replicate the result with a mixed-trials design (ruling out potential strategies associated with the blocked design of the original study). Second, in a mixed-trials experiment, the order of adjective types in the spoken query varies randomly across conditions. In a third experiment, we extend the effect to a triple-conjunction search task. A fourth (control) experiment demonstrates that these effects are not due to an efficient odd-one-out search that ignores the linguistic input. This series of experiments, along with attractor-network simulations of the phenomena, provide further evidence toward understanding linguistically mediated influences in real-time visual search processing.
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i). the selectivity of the cortical neurons, (ii). lateral interactions and (iii). top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
Schindler, Sebastian; Kissler, Johanna
2016-10-01
Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.
Emotion Separation Is Completed Early and It Depends on Visual Field Presentation
Liu, Lichan; Ioannides, Andreas A.
2010-01-01
It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly. PMID:20339549
The ventral visual pathway: An expanded neural framework for the processing of object quality
Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Ungerleider, Leslie G.; Mishkin, Mortimer
2012-01-01
Since the original characterization of the ventral visual pathway our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d’etre for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy that culminates in singular object representations for utilization mainly by ventrolateral prefrontal cortex and, more parsimoniously than this account, incorporates attentional, contextual, and feedback effects. PMID:23265839
Costa, Thiago L; Costa, Marcelo F; Magalhães, Adsson; Rêgo, Gabriel G; Nagy, Balázs V; Boggio, Paulo S; Ventura, Dora F
2015-02-19
Recent research suggests that V1 plays an active role in the judgment of size and distance. Nevertheless, no research has been performed using direct brain stimulation to address this issue. We used transcranial direct-current stimulation (tDCS) to directly modulate the early stages of cortical visual processing while measuring size and distance perception with a psychophysical scaling method of magnitude estimation in a repeated-measures design. The subjects randomly received anodal, cathodal, and sham tDCS in separate sessions starting with size or distance judgment tasks. Power functions were fit to the size judgment data, whereas logarithmic functions were fit to distance judgment data. Slopes and R(2) were compared with separate repeated-measures analyses of variance with two factors: task (size vs. distance) and tDCS (anodal vs. cathodal vs. sham). Anodal tDCS significantly decreased slopes, apparently interfering with size perception. No effects were found for distance perception. Consistent with previous studies, the results of the size task appeared to reflect a prothetic continuum, whereas the results of the distance task seemed to reflect a metathetic continuum. The differential effects of tDCS on these tasks may support the hypothesis that different physiological mechanisms underlie judgments on these two continua. The results further suggest the complex involvement of the early visual cortex in size judgment tasks that go beyond the simple representation of low-level stimulus properties. This supports predictive coding models and experimental findings that suggest that higher-order visual areas may inhibit incoming information from the early visual cortex through feedback connections when complex tasks are performed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A Contribution for the Automatic Sleep Classification Based on the Itakura-Saito Spectral Distance
NASA Astrophysics Data System (ADS)
Cardoso, Eduardo; Batista, Arnaldo; Rodrigues, Rui; Ortigueira, Manuel; Bárbara, Cristina; Martinho, Cristina; Rato, Raul
Sleep staging is a crucial step before scoring sleep apnoea in subjects that are tested for this condition. These patients undergo a whole-night polysomnography recording that includes EEG, EOG, ECG, EMG and respiratory signals. Sleep staging refers to the quantification of sleep depth. Although commercial sleep software is able to stage sleep, there is a general lack of confidence among health practitioners in these machine results. Generally, sleep scoring is done by visual inspection of the overnight patient EEG recording, which takes the attention of an expert medical practitioner for a couple of hours. This contributes to a waiting list of two years for patients of the Portuguese Health Service. In this work we have used a spectral comparison method called the Itakura distance to distinguish between sleep and awake epochs in a night EEG recording, thereby performing the staging automatically. We have used data from 20 patients of Hospital Pulido Valente, which had previously been visually scored by an expert. Our technique's results were promising, in that the Itakura distance can, by itself, distinguish the N2, N3 and awake states with a good degree of certainty. Pre-processing stages for artefact reduction and baseline removal using wavelets were applied.
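For reference, the Itakura-Saito divergence between two power spectra can be computed as in the Python sketch below. The Welch spectral estimates, epoch length and sampling rate are assumptions for illustration, not the settings used in the study.

import numpy as np
from scipy.signal import welch

def itakura_saito(p, q, eps=1e-12):
    """Itakura-Saito divergence between two power spectra (non-symmetric):
    mean of p/q - log(p/q) - 1 over frequency bins."""
    r = (p + eps) / (q + eps)
    return np.mean(r - np.log(r) - 1.0)

def epoch_distance(epoch, reference, fs=100.0):
    """Spectral distance of an EEG epoch from a reference (e.g., awake) epoch.
    Welch spectra and a 30 s epoch are assumptions, not the paper's settings."""
    _, p = welch(epoch, fs=fs, nperseg=256)
    _, q = welch(reference, fs=fs, nperseg=256)
    return itakura_saito(p, q)

# Hypothetical epochs: a slow-wave-dominated epoch vs. an awake-like reference
t = np.arange(0, 30, 1 / 100.0)
awake = np.random.randn(t.size)
sleepy = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
print(epoch_distance(sleepy, awake))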
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
2012-01-01
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of every-day, real-world action sounds. PMID:22582038
Dental care protocol based on visual supports for children with autism spectrum disorders
Mastroberardino, Stefano; Campus, Guglielmo; Olivari, Benedetta; Faggioli, Raffaella; Lenti, Carlo; Strohmenger, Laura
2015-01-01
Background: Subjects with Autism Spectrum Disorders (ASDs) often have difficulties accepting dental treatments. The aim of this study is to propose a dental care protocol based on visual supports to help children with ASDs undergo oral examination and treatments. Material and Methods: 83 children (age range 6-12 years) with a signed consent form were enrolled; intellectual level, verbal fluency and cooperation grade were evaluated. Children were introduced into a four-stage path in order to undergo an oral examination (stage 1), a professional oral hygiene session (stage 2), sealants (stage 3), and, if necessary, a restorative treatment (stage 4). Each stage came after a visual training, performed by a psychologist (stage 1) and by parents at home (stages 2, 3 and 4). The association between acceptance rates at each stage and gender, intellectual level, verbal fluency and cooperation grade was tested with the chi-square test where appropriate. Results: Seventy-seven (92.8%) subjects overcame both stages 1 and 2. Six (7.2%) refused stage 3, and among the 44 subjects who needed restorative treatments, only three refused it. The acceptance rate at each stage was statistically significantly associated with verbal fluency (p=0.02, p=0.04 and p=0.01 for stages 1, 3 and 4, respectively). In stage 2, all subjects accepted to move to the next stage. The verbal/intellectual/cooperation dummy variable was statistically associated with the acceptance rate (p<0.01). Conclusions: The use of visual supports was shown to help children with ASDs undergo dental treatments, even non-verbal children with a low intellectual level, underlining that a behavioural approach should be used as the first strategy to treat patients with ASDs in the dental setting. Key words: Autism spectrum disorders, behaviour management, paediatric dentistry, visual learning methods. PMID:26241453
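The chi-square test of association mentioned above can be reproduced on an invented contingency table as follows (Python/SciPy); the counts are hypothetical and are not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: acceptance of a stage (rows) by verbal fluency (columns).
table = [[40, 20],   # accepted: verbal, non-verbal
         [ 3, 20]]   # refused:  verbal, non-verbal
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")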
Video compression via log polar mapping
NASA Astrophysics Data System (ADS)
Weiman, Carl F. R.
1990-09-01
A three stage process for compressing real time color imagery by factors in the range of 1600-to-1 is proposed for remote driving. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
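A bare-bones version of the Stage 1 log-polar resampling might look like the Python/NumPy sketch below. The output grid size is illustrative (not the paper's parameters), and the eye-tracker-steered center, color/contrast separation and block-compression stages are not reproduced.

import numpy as np

def log_polar_map(image, out_rings=64, out_wedges=128, r_min=1.0):
    """Nearest-neighbour log-polar resampling of a square image about its
    center; compression comes from out_rings * out_wedges being far smaller
    than the number of input pixels."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # ring radii grow exponentially, matching the log-polar sampling gradient
    radii = r_min * (r_max / r_min) ** (np.arange(out_rings) / (out_rings - 1))
    angles = np.linspace(0.0, 2.0 * np.pi, out_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]                 # shape (out_rings, out_wedges)

# Hypothetical frame: 512x512 image -> 64x128 log-polar samples (32x fewer pixels)
frame = np.random.rand(512, 512)
compressed = log_polar_map(frame)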
[Research advances on cortical functional and structural deficits of amblyopia].
Wu, Y; Liu, L Q
2017-05-11
Previous studies have observed functional deficits in primary visual cortex. With the development of functional magnetic resonance imaging and electrophysiological technique, the research of the striate, extra-striate cortex and higher-order cortical deficit underlying amblyopia reaches a new stage. The neural mechanisms of amblyopia show that anomalous responses exist throughout the visual processing hierarchy, including the functional and structural abnormalities. This review aims to summarize the current knowledge about structural and functional deficits of brain regions associated with amblyopia. (Chin J Ophthalmol, 2017, 53: 392 - 395) .
Integrative omics analysis. A study based on Plasmodium falciparum mRNA and protein data.
Tomescu, Oana A; Mattanovich, Diethard; Thallinger, Gerhard G
2014-01-01
Technological improvements have shifted the focus from data generation to data analysis. The availability of large amounts of data from transcriptomics, proteomics and metabolomics experiments raises new questions concerning suitable integrative analysis methods. We compare three integrative analysis techniques (co-inertia analysis, generalized singular value decomposition and integrative biclustering) by applying them to gene and protein abundance data from the six life cycle stages of Plasmodium falciparum. Co-inertia analysis (CIA) is used to visualize and explore gene and protein data. The generalized singular value decomposition (GSVD) has shown its potential in the analysis of two transcriptome data sets. Integrative biclustering (IBC) applies biclustering to gene and protein data. Using CIA, we visualize the six life cycle stages of Plasmodium falciparum, as well as GO terms, in a 2D plane and interpret the spatial configuration. With GSVD, we decompose the transcriptomic and proteomic data sets into matrices with biologically meaningful interpretations and explore the processes captured by the data sets. IBC identifies groups of genes, proteins, GO terms and life cycle stages of Plasmodium falciparum. We show method-specific results as well as a network view of the life cycle stages based on the results common to all three methods. Additionally, by combining the results of the three methods, we create a three-fold validated network of life cycle stage-specific GO terms: sporozoites are associated with transcription and transport; merozoites with entry into the host cell as well as biosynthetic and metabolic processes; rings with oxidation-reduction processes; trophozoites with glycolysis and energy production; schizonts with antigenic variation and immune response; gametocytes with DNA packaging and mitochondrial transport. Furthermore, the network connectivity underlines the separation of the intraerythrocytic cycle from the gametocyte and sporozoite stages. Using integrative analysis techniques, we can integrate knowledge from different levels and obtain a wider view of the system under study. The overlap between method-specific and common results is considerable, even though the basic mathematical assumptions are very different. The three-fold validated network of life cycle stage characteristics of Plasmodium falciparum could identify a large number of the known associations from the literature in only one study.
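For readers who want to experiment with the co-inertia idea, the following Python/NumPy sketch extracts the shared co-structure of two tables (samples x genes and samples x proteins) from the SVD of their cross-covariance matrix. It omits the row/column weighting and prior ordinations of the full CIA procedure, and all names and dimensions are illustrative assumptions rather than the authors' implementation.

import numpy as np

def coinertia_sketch(X, Y, n_axes=2):
    """Minimal co-inertia-style analysis of two tables sharing rows (samples).

    X: samples x genes (e.g., mRNA abundance); Y: samples x proteins.
    The co-structure is taken from the SVD of the cross-covariance matrix;
    sample scores on the first axes can then be plotted in a 2D plane, as in
    the CIA visualization described above.
    """
    Xc = X - X.mean(axis=0)                 # column-center each table
    Yc = Y - Y.mean(axis=0)
    cross = Xc.T @ Yc / (X.shape[0] - 1)    # genes x proteins cross-covariance
    U, s, Vt = np.linalg.svd(cross, full_matrices=False)
    gene_loadings = U[:, :n_axes]
    protein_loadings = Vt[:n_axes].T
    # Project the samples (life cycle stages) onto the shared co-inertia axes.
    x_scores = Xc @ gene_loadings
    y_scores = Yc @ protein_loadings
    return x_scores, y_scores, s[:n_axes]

# Toy example: 6 life cycle stages, 100 genes, 80 proteins.
rng = np.random.default_rng(0)
xs, ys, sv = coinertia_sketch(rng.normal(size=(6, 100)), rng.normal(size=(6, 80)))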
Tracking moving identities: after attending the right location, the identity does not come for free.
Pinto, Yaïr; Scholte, H Steven; Lamme, V A F
2012-01-01
Although tracking of identical moving objects has been studied since the 1980s, the study of tracking moving objects with distinct identities (referred to as Multiple Identity Tracking, MIT) has begun only recently. So far, only behavioral studies of MIT have been undertaken. These studies have left a fundamental question regarding MIT unanswered: is MIT a one-stage or a two-stage process? According to the one-stage model, after a location has been attended, the identity is released without effort. However, according to the two-stage model, there are two effortful stages in MIT: attending to a location, and attending to the identity of the object at that location. In the current study we investigated this question by measuring brain activity in response to tracking familiar and unfamiliar targets. Familiarity is known to automate effortful processes, so if attention is needed to identify the object, identification should become easier with familiar targets. However, if no such attention is needed, familiarity can only affect other processes (such as memory for the target set). Our results revealed that on unfamiliar trials neural activity was higher in both attentional and visual identification networks. These results suggest that familiarity in MIT automates attentional identification processes, thus suggesting that attentional identification is needed in MIT. This then implies that MIT is essentially a two-stage process, since after attending the location, the identity does not seem to come for free.
Odours reduce the magnitude of object substitution masking for matching visual targets in females.
Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B
2016-08-01
Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females compared with males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst non-odour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
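As a reminder of the sensitivity measure mentioned above, a minimal Python sketch of d' is given below; the log-linear correction for extreme rates is one common convention and is our assumption, since the correction used in the study is not specified.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d', with a log-linear correction so that
    hit or false-alarm rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 18 hits / 6 misses vs. 5 false alarms / 19 correct rejections.
print(d_prime(18, 6, 5, 19))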
Identity-expression interaction in face perception: sex, visual field, and psychophysical factors.
Godard, Ornella; Baudouin, Jean-Yves; Bonnet, Philippe; Fiori, Nicole
2013-01-01
We investigated the psychophysical factors underlying the identity-emotion interaction in face perception. Visual field and sex were also taken into account. Participants had to judge whether a probe face, presented in either the left or the right visual field, and a central target face belonged to the same person while emotional expression varied (Experiment 1), or to judge whether probe and target faces expressed the same emotion while identity was manipulated (Experiment 2). For accuracy we replicated the mutual facilitation effect between identity and emotion; no sex or hemispheric differences were found. Processing speed measurements, however, showed a lesser degree of interference in women than in men, especially for matching identity when faces expressed different emotions and the probe face was presented in the left visual field. Psychophysical indices can be used to determine whether these effects are perceptual (A') or instead arise at a post-perceptual decision-making stage (B"). The influence of identity on the processing of facial emotion seems to be due to perceptual factors, whereas the influence of emotion changes on identity processing seems to be related to decisional factors. In addition, men seem to be more "conservative" after a LVF/RH probe-face presentation when processing identity. Women seem to benefit from better abilities to extract facial invariant aspects related to identity.
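For reference, the non-parametric indices A' and B" mentioned above can be computed as in the following minimal Python sketch; these are the standard Pollack & Norman and Grier formulas, and the exact variants used by the authors are not specified, so treat the sketch as illustrative.

def a_prime(H, F):
    """Non-parametric sensitivity A' (Pollack & Norman); H = hit rate, F = false-alarm rate."""
    if H >= F:
        return 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))
    return 0.5 - ((F - H) * (1 + F - H)) / (4 * F * (1 - H))

def b_double_prime(H, F):
    """Non-parametric bias B'' (Grier); by one common convention, positive
    values indicate a more conservative criterion."""
    num = H * (1 - H) - F * (1 - F)
    den = H * (1 - H) + F * (1 - F)
    if H < F:
        num = -num
    return num / den if den else 0.0

print(a_prime(0.75, 0.20), b_double_prime(0.75, 0.20))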
Acosta-Rojas, E Ruthy; Comas, Mercè; Sala, Maria; Castells, Xavier
2006-10-01
To evaluate the association between visual impairment (visual acuity, contrast sensitivity, stereopsis) and patient-reported visual disability at different stages of cataract surgery. A cohort of 104 patients aged 60 years and over with bilateral cataract was assessed preoperatively, after first-eye surgery (monocular pseudophakia) and after second-eye surgery (binocular pseudophakia). Partial correlation coefficients (PCC) and linear regression models were calculated. In patients with bilateral cataracts, visual disability was associated with visual acuity (PCC = -0.30) and, to a lesser extent, with contrast sensitivity (PCC = 0.16) and stereopsis (PCC = -0.09). In monocular and binocular pseudophakia, visual disability was more strongly associated with stereopsis (PCC = -0.26 monocular and -0.51 binocular) and contrast sensitivity (PCC = 0.18 monocular and 0.34 binocular) than with visual acuity (PCC = -0.18 monocular and -0.18 binocular). Visual acuity, contrast sensitivity and stereopsis accounted for between 17% and 42% of variance in visual disability. The association of visual impairment with patient-reported visual disability differed at each stage of cataract surgery. Measuring other forms of visual impairment independently from visual acuity, such as contrast sensitivity or stereopsis, could be important in evaluating both needs and outcomes in cataract surgery. More comprehensive assessment of the impact of cataract on patients should include measurement of both visual impairment and visual disability.
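The partial correlations reported above can be reproduced in spirit with a residual-based sketch such as the one below; the covariate set and the toy data are assumptions for illustration only, not the study's analysis.

import numpy as np

def partial_corr(x, y, covariates):
    """Partial correlation between x and y controlling for `covariates`
    (an n x k array), computed from the residuals of least-squares fits."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Toy example: disability vs. stereopsis, controlling for acuity and contrast sensitivity.
rng = np.random.default_rng(1)
acuity, contrast = rng.normal(size=104), rng.normal(size=104)
stereopsis = rng.normal(size=104)
disability = -0.3 * stereopsis + 0.2 * acuity + rng.normal(size=104)
print(partial_corr(disability, stereopsis, np.column_stack([acuity, contrast])))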
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2016-01-01
Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and by explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage, as well as fixation durations, which reflect aspects of attention at later stages of visual search. The strength of visual attention captured by faces and by explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on the face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching: longer reaction times were associated with longer face fixations at the initial stage of searching, longer reaction times were associated with longer face fixations at the later stages of searching, and shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.
A Curriculum for Logical Thinking. NAAESC Occasional Papers, Volume 1, Number 4.
ERIC Educational Resources Information Center
Charuhas, Mary S.
The purpose of this paper is to demonstrate methods for developing cognitive processes in adult students. It discusses concept formation and concept attainment, problem solving (which involves concept formation and concept attainment), Bruner's three stages of learning (enactive, iconic, and symbolic modes), and visual thinking. A curriculum for…
Teaching Reading to the Disadvantaged Adult.
ERIC Educational Resources Information Center
Dinnan, James A.; Ulmer, Curtis, Ed.
This manual is designed to assess the background of the individual and to bring him to the stage of unlocking the symbolic codes called Reading and Mathematics. The manual begins with Introduction to a Symbolic Code (The Thinking Process and The Key to Learning Basis), and continues with Basic Reading Skills (Readiness, Visual Discrimination,…
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.
Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan
2014-01-01
The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of the ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model with a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and the oriented features of the corresponding object, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.
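A minimal sketch of the parametric-bias idea is given below in PyTorch: each observed sequence gets its own learnable PB vector that is fed to the recurrent network at every time step and trained jointly with the weights. This is a simplified, single-stream illustration under our own assumptions (no horizontal product model, arbitrary dimensions), not the authors' architecture.

import torch
import torch.nn as nn

class RNNPBSketch(nn.Module):
    """Minimal recurrent network with parametric bias (PB) units: each stored
    sequence has a small PB vector, concatenated with the input at every time
    step, that later acts as a low-dimensional code for the learned primitive."""
    def __init__(self, n_sequences, input_dim, pb_dim=2, hidden_dim=32):
        super().__init__()
        self.pb = nn.Parameter(torch.zeros(n_sequences, pb_dim))  # one PB code per sequence
        self.rnn = nn.RNN(input_dim + pb_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, input_dim)           # predict the next input

    def forward(self, seq_idx, inputs):
        # inputs: (batch, time, input_dim); broadcast each sequence's PB over time.
        pb = self.pb[seq_idx].unsqueeze(1).expand(-1, inputs.size(1), -1)
        h, _ = self.rnn(torch.cat([inputs, pb], dim=-1))
        return self.readout(h)

# Training sketch: minimize one-step prediction error over observed trajectories.
model = RNNPBSketch(n_sequences=3, input_dim=4)
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
trajectories = torch.randn(3, 20, 4)              # 3 toy sequences, 20 steps, 4 features
for _ in range(100):
    pred = model(torch.arange(3), trajectories[:, :-1])
    loss = nn.functional.mse_loss(pred, trajectories[:, 1:])
    optim.zero_grad(); loss.backward(); optim.step()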
Distributed and opposing effects of incidental learning in the human brain.
Hall, Michelle G; Naughtin, Claire K; Mattingley, Jason B; Dux, Paul E
2018-06-01
Incidental learning affords a behavioural advantage when sensory information matches regularities that have previously been encountered. Previous studies have taken a focused approach by probing the involvement of specific candidate brain regions underlying incidentally acquired memory representations, as well as expectation effects on early sensory representations. Here, we investigated the broader extent of the brain's sensitivity to violations and fulfilments of expectations, using an incidental learning paradigm in which the contingencies between target locations and target identities were manipulated without participants' overt knowledge. Multivariate analysis of functional magnetic resonance imaging data was applied to compare the consistency of neural activity for visual events that the contingency manipulation rendered likely versus unlikely. We observed widespread sensitivity to expectations across frontal, temporal, occipital, and sub-cortical areas. These activation clusters showed distinct response profiles, such that some regions displayed more reliable activation patterns under fulfilled expectations, whereas others showed more reliable patterns when expectations were violated. These findings reveal that expectations affect multiple stages of information processing during visual decision making, rather than early sensory processing stages alone. Copyright © 2018 Elsevier Inc. All rights reserved.
Levichkina, Ekaterina; Saalmann, Yuri B; Vidyasagar, Trichur R
2017-03-01
Primate posterior parietal cortex (PPC) is known to be involved in controlling spatial attention. Neurons in one part of the PPC, the lateral intraparietal area (LIP), show enhanced responses to objects at attended locations. Although many are selective for object features, such as the orientation of a visual stimulus, it is not clear how LIP circuits integrate feature-selective information when providing attentional feedback about behaviorally relevant locations to the visual cortex. We studied the relationship between object feature and spatial attention properties of LIP cells in two macaques by measuring the cells' orientation selectivity and the degree of attentional enhancement while performing a delayed match-to-sample task. Monkeys had to match both the location and orientation of two visual gratings presented separately in time. We found a wide range in orientation selectivity and degree of attentional enhancement among LIP neurons. However, cells with significant attentional enhancement had much less orientation selectivity in their response than cells which showed no significant modulation by attention. Additionally, orientation-selective cells showed working memory activity for their preferred orientation, whereas cells showing attentional enhancement also synchronized with local neuronal activity. These results are consistent with models of selective attention incorporating two stages, where an initial feature-selective process guides a second stage of focal spatial attention. We suggest that LIP contributes to both stages, where the first stage involves orientation-selective LIP cells that support working memory of the relevant feature, and the second stage involves attention-enhanced LIP cells that synchronize to provide feedback on spatial priorities. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
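Two of the quantities discussed above, orientation selectivity and attentional enhancement, are often summarized with simple contrast indices; the Python sketch below shows one common convention (our assumption, not necessarily the exact metrics used in this study) and assumes the tested orientations evenly tile 180 degrees.

import numpy as np

def orientation_selectivity_index(rates_by_orientation):
    """OSI = (R_pref - R_orth) / (R_pref + R_orth): contrast between the
    response at the preferred orientation and the orthogonal orientation."""
    rates = np.asarray(rates_by_orientation, dtype=float)  # mean rate per tested orientation
    r_pref = rates.max()
    r_orth = rates[(np.argmax(rates) + len(rates) // 2) % len(rates)]  # 90 deg away
    return (r_pref - r_orth) / (r_pref + r_orth)

def attentional_modulation_index(rate_attended, rate_unattended):
    """AMI = (attended - unattended) / (attended + unattended)."""
    return (rate_attended - rate_unattended) / (rate_attended + rate_unattended)

# Example: a cell tested at 4 orientations (0, 45, 90, 135 deg), attended vs. unattended.
print(orientation_selectivity_index([22.0, 10.0, 6.0, 9.0]),
      attentional_modulation_index(18.0, 12.0))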
Girman, S V; Lund, R D
2007-07-01
The uppermost layer (stratum griseum superficiale, SGS) of the superior colliculus (SC) provides an important gateway from the retina to the visual extrastriate and visuomotor systems. The majority of attention has been given to the role of this "visual" SC in saccade generation and target selection and it is generally considered to be less important in visual perception. We have found, however, that in the rat SGS1, the most superficial division of the SGS, the neurons perform very sophisticated analysis of visual information. First, in studying their responses with a variety of flashing stimuli we found that the neurons respond not to brightness changes per se, but to the appearance and/or disappearance of visual shapes in their receptive fields (RFs). Contrary to conventional RFs of neurons at the early stages of visual processing, the RFs in SGS1 cannot be described in terms of fixed spatial distribution of excitatory and inhibitory inputs. Second, SGS1 neurons showed robust orientation tuning to drifting gratings and orientation-specific modulation of the center response from surround. These are features previously seen only in visual cortical neurons and are considered to be involved in "contour" perception and figure-ground segregation. Third, responses of SGS1 neurons showed complex dynamics; typically the response tuning became progressively sharpened with repetitive grating periods. We conclude that SGS1 neurons are involved in considerably more complex analysis of retinal input than was previously thought. SGS1 may participate in early stages of figure-ground segregation and have a role in low-resolution nonconscious vision as encountered after visual decortication.
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W
2017-12-01
Why does our visual system fail to reconstruct reality when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far should we take computational models of vision that share our ability to detect illusions? This study addresses these questions by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of the classical receptive field implementation for simple cells in early stages of vision, with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has high potential in revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
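The multiscale Difference-of-Gaussians filtering at the heart of the model can be sketched in a few lines of Python with SciPy; the sigma pairs and the test pattern below are arbitrary placeholders, whereas the model above tunes the scales to the object/texture sizes of each illusion.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter_bank(image, scales=((1, 2), (2, 4), (4, 8))):
    """Difference-of-Gaussians responses at several center/surround scales,
    a crude stand-in for a multiscale retinal ganglion cell model."""
    return [gaussian_filter(image, s_center) - gaussian_filter(image, s_surround)
            for s_center, s_surround in scales]

# Example: a simple striped test pattern of alternating dark/light bands.
tiles = (np.indices((128, 128)).sum(axis=0) % 16 < 8).astype(float)
responses = dog_filter_bank(tiles)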
Perceptual load-dependent neural correlates of distractor interference inhibition.
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N
2011-01-18
The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load.
Social Anxiety Modulates Subliminal Affective Priming
Paul, Elizabeth S.; Pope, Stuart A. J.; Fennell, John G.; Mendl, Michael T.
2012-01-01
Background It is well established that there is anxiety-related variation between observers in the very earliest, pre-attentive stage of visual processing of images such as emotionally expressive faces, often leading to enhanced attention to threat in a variety of disorders and traits. Whether there is also variation in early-stage affective (i.e. valenced) responses resulting from such images, however, is not yet known. The present study used the subliminal affective priming paradigm to investigate whether people varying in trait social anxiety also differ in their affective responses to very briefly presented, emotionally expressive face images. Methodology/Principal Findings Participants (n = 67) completed a subliminal affective priming task, in which briefly presented and smiling, neutral and angry faces were shown for 10 ms durations (below objective and subjective thresholds for visual discrimination), and immediately followed by a randomly selected Chinese character mask (2000 ms). Ratings of participants' liking for each Chinese character indicated the degree of valenced affective response made to the unseen emotive images. Participants' ratings of their liking for the Chinese characters were significantly influenced by the type of face image preceding them, with smiling faces generating more positive ratings than neutral and angry ones (F(2,128) = 3.107, p<0.05). Self-reported social anxiety was positively correlated with ratings of smiling relative to neutral-face primed characters (Pearson's r = .323, p<0.01). Individual variation in self-reported mood awareness was not associated with ratings. Conclusions Trait social anxiety is associated with individual variation in affective responding, even in response to the earliest, pre-attentive stage of visual image processing. However, the fact that these priming effects are limited to smiling and not angry (i.e. threatening) images leads us to propose that the pre-attentive processes involved in generating the subliminal affective priming effect may be different from those that generate attentional biases in anxious individuals. PMID:22615873
Coggan, David D; Baker, Daniel H; Andrews, Timothy J
2016-01-01
Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed that categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident only while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for all the variance in the intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.
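Time-resolved multivariate pattern analysis of the kind described above is commonly implemented by fitting a cross-validated classifier at each time point; the sketch below uses scikit-learn with a logistic-regression classifier as a stand-in (the authors' exact classifier and cross-validation scheme are not specified here).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(epochs, labels, cv=5):
    """Cross-validated classification accuracy at each time point.

    epochs: (n_trials, n_channels, n_times) array of EEG data;
    labels: (n_trials,) category labels. A separate classifier is fit per
    time point, yielding an accuracy time course from which the latency of
    category information (e.g., ~80-100 ms) can be read off.
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return np.array([cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
                     for t in range(epochs.shape[-1])])

# Toy example: 60 trials, 32 channels, 50 time points, 3 categories.
rng = np.random.default_rng(2)
acc = decode_over_time(rng.normal(size=(60, 32, 50)), rng.integers(0, 3, 60))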
Giraud, Anne Lise; Truy, Eric
2002-01-01
Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.
ERIC Educational Resources Information Center
Abbas, Rasha Al-Sayed Sabry
2017-01-01
This research aimed at investigating the effectiveness of the STEM approach in developing visual reasoning and learning independence for preparatory stage students. To achieve this aim, the researcher designed a program based on the STEM approach in light of the principles of nanotechnology. Twenty-one preparatory stage students participated in the…
Wu, Lili; Gu, Ruolei; Zhang, Jianxin
2016-01-01
Attachment is critical to each individual. It affects the cognitive-affective processing of social information. The present study examines how attachment affects the processing of social information, specifically maternal information. We assessed the behavioral and electrophysiological responses to maternal information (compared to non-specific others) in a Go/No-go Association Task (GNAT) with 22 participants. The results illustrated that attachment affected maternal information processing during three sequential stages of information processing. First, attachment affected visual perception, reflected by enhanced P100 and N170 components elicited by maternal information as compared to others' information. Second, compared to others, the mother obtained more attentional resources, reflected by faster behavioral responses to maternal information and larger P200 and P300 components. Finally, the mother was evaluated positively, reflected by a shorter P300 latency in the mother + good condition as compared to the mother + bad condition. These findings indicate that the processing of attachment-relevant information is neurologically differentiated from that of other types of social information, from an early stage of perceptual processing to late high-level processing.
How color enhances visual memory for natural scenes.
Spence, Ian; Wong, Patrick; Rusan, Maria; Rastegar, Naghmeh
2006-01-01
We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.
Xu, Hong-Ping; Burbridge, Timothy J.; Ye, Meijun; Chen, Minggang; Ge, Xinxin; Zhou, Z. Jimmy
2016-01-01
Retinal waves are correlated bursts of spontaneous activity whose spatiotemporal patterns are critical for early activity-dependent circuit elaboration and refinement in the mammalian visual system. Three separate developmental wave epochs or stages have been described, but the mechanism(s) of pattern generation of each and their distinct roles in visual circuit development remain incompletely understood. We used neuroanatomical, in vitro and in vivo electrophysiological, and optical imaging techniques in genetically manipulated mice to examine the mechanisms of wave initiation and propagation and the role of wave patterns in visual circuit development. Through deletion of β2 subunits of nicotinic acetylcholine receptors (β2-nAChRs) selectively from starburst amacrine cells (SACs), we show that mutual excitation among SACs is critical for Stage II (cholinergic) retinal wave propagation, supporting models of wave initiation and pattern generation from within a single retinal cell type. We also demonstrate that β2-nAChRs in SACs, and normal wave patterns, are necessary for eye-specific segregation. Finally, we show that Stage III (glutamatergic) retinal waves are not themselves necessary for normal eye-specific segregation, but elimination of both Stage II and Stage III retinal waves dramatically disrupts eye-specific segregation. This suggests that persistent Stage II retinal waves can adequately compensate for Stage III retinal wave loss during the development and refinement of eye-specific segregation. These experiments confirm key features of the “recurrent network” model for retinal wave propagation and clarify the roles of Stage II and Stage III retinal wave patterns in visual circuit development. SIGNIFICANCE STATEMENT Spontaneous activity drives early mammalian circuit development, but the initiation and patterning of activity vary across development and among modalities. Cholinergic “retinal waves” are initiated in starburst amacrine cells and propagate to retinal ganglion cells and higher-order visual areas, but the mechanism responsible for creating their unique and critical activity pattern is incompletely understood. We demonstrate that cholinergic wave patterns are dictated by recurrent connectivity within starburst amacrine cells, and retinal ganglion cells act as “readouts” of patterned activity. We also show that eye-specific segregation occurs normally without glutamatergic waves, but elimination of both cholinergic and glutamatergic waves completely disrupts visual circuit development. These results suggest that each retinal wave pattern during development is optimized for concurrently refining multiple visual circuits. PMID:27030771
Visual Cortex Inspired CNN Model for Feature Construction in Text Analysis
Fu, Hongping; Niu, Zhendong; Zhang, Chunxia; Ma, Jing; Chen, Jie
2016-01-01
Recently, biologically inspired models have gradually been proposed to solve problems in text analysis. Convolutional neural networks (CNN) are hierarchical artificial neural networks that include a variety of multilayer perceptrons. According to biological research, CNN can be improved by bringing in the attention modulation and memory processing of the primate visual cortex. In this paper, we employ the above properties of the primate visual cortex to improve CNN and propose a biological-mechanism-driven-feature-construction based answer recommendation method (BMFC-ARM), which is used to recommend the best answer for given questions in community question answering. BMFC-ARM is an improved CNN with four channels respectively representing questions, answers, asker information and answerer information, and mainly contains two stages: biological mechanism driven feature construction (BMFC) and answer ranking. BMFC imitates the attention modulation property by introducing the asker information and answerer information of given questions and the similarity between them, and imitates the memory processing property by bringing in the user reputation information for answerers. Then the feature vector for answer ranking is constructed by fusing the asker-answerer similarities, the answerer's reputation and the corresponding vectors of question, answer, asker, and answerer. Finally, a softmax is used at the answer ranking stage to obtain the best answers from the feature vector. The experimental results of answer recommendation on the Stackexchange dataset show that BMFC-ARM exhibits better performance. PMID:27471460
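A much-simplified sketch of a four-channel text CNN with side-feature fusion is given below in PyTorch, purely to make the two-stage idea concrete; the layer sizes, pooling, and scoring head are our assumptions and do not reproduce the BMFC-ARM configuration.

import torch
import torch.nn as nn

class FourChannelTextCNN(nn.Module):
    """Sketch of a four-channel CNN for answer ranking: question, answer,
    asker and answerer texts are each encoded by their own 1-D convolution,
    pooled, concatenated with scalar side features (asker-answerer similarity,
    answerer reputation), and scored for softmax-based ranking."""
    def __init__(self, vocab_size=5000, emb_dim=64, n_filters=32, n_side_features=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1) for _ in range(4)])
        self.score = nn.Linear(4 * n_filters + n_side_features, 1)

    def forward(self, channels, side_features):
        # channels: list of 4 LongTensors, each (batch, seq_len); side_features: (batch, n_side).
        pooled = [conv(self.embed(x).transpose(1, 2)).relu().max(dim=2).values
                  for conv, x in zip(self.convs, channels)]
        return self.score(torch.cat(pooled + [side_features], dim=1)).squeeze(-1)

# Ranking sketch: softmax over the scores of 5 candidate answers to one question.
model = FourChannelTextCNN()
texts = [torch.randint(0, 5000, (5, 20)) for _ in range(4)]   # toy token ids per channel
side = torch.rand(5, 2)                                       # toy similarity + reputation
probs = torch.softmax(model(texts, side), dim=0)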
Students' Development of Representational Competence Through the Sense of Touch
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Balachandran, Sadhana
2017-06-01
Electromagnetism is an umbrella term encapsulating several different concepts like electric current, electric fields and forces, and magnetic fields and forces, among other topics. However, a number of studies in the past have highlighted students' poor conceptual understanding of electromagnetism concepts even after instruction. This study aims to identify novel forms of "hands-on" instruction that can result in representational competence and conceptual gain. Specifically, this study aimed to identify whether the use of visuohaptic simulations can have an effect on student representations of electromagnetism-related concepts. The guiding question is: How do visuohaptic simulations influence undergraduate students' representations of electric forces? Participants included nine undergraduate students from science, technology, or engineering backgrounds who participated in a think-aloud procedure while interacting with a visuohaptic simulation. The think-aloud procedure was divided into three stages: a prediction stage, a minimally visual haptic stage, and a visually enhanced haptic stage. The results of this study suggest that students accurately characterized and represented the forces felt around particle, line, and ring charges in either the prediction stage, the minimally visual haptic stage, or the visually enhanced haptic stage. Also, some students accurately depicted the three-dimensional nature of the field for each configuration in the two stages that included a tactile mode, where the point charge was the most challenging one.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
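The auditory stimuli described above are produced by convolving an anechoic recording with measured binaural impulse responses; a minimal Python/SciPy sketch of that auralization step is shown below with synthetic placeholder signals (the calibration and playback details of the actual experiments are not modeled).

import numpy as np
from scipy.signal import fftconvolve

def auralize(anechoic, birs):
    """Convolve a mono anechoic recording with a binaural impulse response.

    anechoic: (n_samples,) mono signal; birs: (n_ir_samples, 2) left/right
    impulse responses measured at one seat. Returns an (n, 2) binaural signal."""
    left = fftconvolve(anechoic, birs[:, 0])
    right = fftconvolve(anechoic, birs[:, 1])
    out = np.column_stack([left, right])
    return out / np.max(np.abs(out))        # normalize to avoid clipping

# Toy example: 1 s of noise standing in for music at 44.1 kHz, 0.5 s synthetic impulse response.
fs = 44100
rng = np.random.default_rng(3)
music = rng.normal(size=fs)
birs = rng.normal(size=(fs // 2, 2)) * np.exp(-np.linspace(0, 8, fs // 2))[:, None]
stimulus = auralize(music, birs)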
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics is computed and combined to form a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this high-dimensional feature vector is fused into a single visual comfort score by applying a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
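To make the two-stage pipeline concrete, the sketch below computes a few saliency-weighted disparity statistics and regresses them onto comfort scores with scikit-learn's random forest; the specific features, their number, and the toy data are illustrative assumptions rather than the paper's feature set.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def comfort_features(disparity_map, saliency_map):
    """Stage 1 sketch: a handful of saliency-weighted disparity statistics."""
    w = saliency_map / saliency_map.sum()
    mean = np.sum(w * disparity_map)
    var = np.sum(w * (disparity_map - mean) ** 2)
    return np.array([mean, np.sqrt(var), np.sum(w * np.abs(disparity_map)),
                     np.percentile(disparity_map[saliency_map > saliency_map.mean()], 95)])

# Stage 2 sketch: map feature vectors to subjective comfort scores (MOS) with a random forest.
rng = np.random.default_rng(4)
X = np.stack([comfort_features(rng.normal(size=(64, 64)), rng.random((64, 64)))
              for _ in range(40)])
mos = rng.uniform(1, 5, size=40)                 # placeholder mean opinion scores
model = RandomForestRegressor(n_estimators=100).fit(X, mos)
predicted_comfort = model.predict(X[:5])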
How to (and how not to) think about top-down influences on visual perception.
Teufel, Christoph; Nanay, Bence
2017-01-01
The question of whether cognition can influence perception has a long history in neuroscience and philosophy. Here, we outline a novel approach to this issue, arguing that it should be viewed within the framework of top-down information-processing. This approach leads to a reversal of the standard explanatory order of the cognitive penetration debate: we suggest studying top-down processing at various levels without preconceptions of perception or cognition. Once a clear picture has emerged about which processes have influences on those at lower levels, we can re-address the extent to which they should be considered perceptual or cognitive. Using top-down processing within the visual system as a model for higher-level influences, we argue that the current evidence indicates clear constraints on top-down influences at all stages of information processing; it does, however, not support the notion of a boundary between specific types of information-processing as proposed by the cognitive impenetrability hypothesis. Copyright © 2016 Elsevier Inc. All rights reserved.
Amygdala Response to Emotional Stimuli without Awareness: Facts and Interpretations
Diano, Matteo; Celeghin, Alessia; Bagnis, Arianna; Tamietto, Marco
2017-01-01
Over the past two decades, evidence has accumulated that the human amygdala exerts some of its functions also when the observer is not aware of the content, or even presence, of the triggering emotional stimulus. Nevertheless, there is as of yet no consensus on the limits and conditions that affect the extent of amygdala’s response without focused attention or awareness. Here we review past and recent studies on this subject, examining neuroimaging literature on healthy participants as well as brain-damaged patients, and we comment on their strengths and limits. We propose a theoretical distinction between processes involved in attentional unawareness, wherein the stimulus is potentially accessible to enter visual awareness but fails to do so because attention is diverted, and in sensory unawareness, wherein the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. We argue this distinction, along with data sampling amygdala responses with high temporal resolution, helps to appreciate the multiplicity of functional and anatomical mechanisms centered on the amygdala and supporting its role in non-conscious emotion processing. Separate, but interacting, networks relay visual information to the amygdala exploiting different computational properties of subcortical and cortical routes, thereby supporting amygdala functions at different stages of emotion processing. This view reconciles some apparent contradictions in the literature, as well as seemingly contrasting proposals, such as the dual stage and the dual route model. We conclude that evidence in favor of the amygdala response without awareness is solid, albeit this response originates from different functional mechanisms and is driven by more complex neural networks than commonly assumed. Acknowledging the complexity of such mechanisms can foster new insights on the varieties of amygdala functions without awareness and their impact on human behavior. PMID:28119645
Visual imagery processing and knowledge of famous names in Alzheimer's disease and MCI.
Borg, Céline; Thomas-Antérion, Catherine; Bogey, Soline; Davier, Karine; Laurent, Bernard
2010-09-01
Memory for famous people and visual imagery retrieval were investigated in patients in the early stages of Alzheimer's disease (AD) and in the prodromal stage of AD, so-called Mild Cognitive Impairment (MCI). Fifteen patients with AD (MMSE > or = 23), 15 patients with amnestic MCI (a-MCI) and 15 normal controls (NC) performed a famous names test designed to evaluate knowledge of the semantic biographical information and distinctive physical features of famous persons. Results indicated that patients with AD and a-MCI generated significantly fewer physical features and less semantic biographical knowledge about famous persons than did normal control participants. Additionally, significant differences were observed between a-MCI and AD patients in all tasks. The present findings confirm recent studies reporting semantic memory impairment in MCI. Moreover, the current findings show that mental imagery is lowered in a-MCI and AD and is likely related to the early semantic impairment.
Discrimination of single features and conjunctions by children.
Taylor, M J; Chevalier, H; Lobaugh, N J
2003-12-01
Stimuli that are discriminated by a conjunction of features can show more rapid early processing in adults. To determine how this facilitation effect develops, the processing of visual features and their conjunction was examined in 7-12-year-old children. The children completed a series of tasks in which they made a target/non-target judgement as a function of shape only, colour only, or shape and colour features, while event-related potentials were recorded. To assess early stages of feature processing, the posteriorly distributed P1 and N1 were analysed. Attentional effects were seen for both components: P1 had a shorter latency, and P1 and N1 had larger amplitudes, to targets than to non-targets. Task effects were driven by the conjunction task. P1 amplitude was largest, while N1 amplitude was smallest, for the conjunction targets. In contrast to the larger left-sided N1 seen in adults, N1 had a symmetrical distribution in the children. N1 latency was shortest for the conjunction targets in the 9-10-year-olds and 11-12-year-olds, demonstrating facilitation in children that nonetheless continued to develop over the pre-teen years. These data underline the sensitivity of early stages of processing to both top-down modulations and the parallel binding of non-spatial features in young children. Furthermore, facilitation effects (increased speed of processing when features need to be conjoined) mature in mid-childhood, arguing against a hierarchical model of visual processing and supporting a rapid, integrated facilitative model.
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation), and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Pathological findings in the retina and visual pathways associated with natural Scrapie in sheep.
Hortells, Paloma; Monzón, Marta; Monleón, Eva; Acín, Cristina; Vargas, Antonia; Bolea, Rosa; Luján, Lluís; Badiola, Juan José
2006-09-07
This work represents a comprehensive pathological description of the retina and visual pathways in naturally affected Scrapie sheep. Twenty naturally affected Scrapie sheep and 6 matched controls were used. Eyes, optic nerves and brain from each animal were fixed and histologically processed using hematoxylin-eosin, followed by immunohistochemical staining for prion protein (PrPsc) and glial fibrillar acidic protein (GFAP). Retinal histopathological changes were observed in only 7 clinically affected animals and mainly consisted of loss of outer limiting layer definition, outer plexiform layer atrophy, disorganization and loss of nuclei in both nuclear layers, and Müller glia hypertrophy. PrPsc was detected in the retina of 19 of the 20 sheep and characterized by a disseminated granular deposit across layers and intraneuronally in ganglion cells. The inner plexiform and the ganglion cell layers were the structures most severely affected by PrPsc deposits. PrPsc exhibited a tendency to spread from these two layers to the others. A marked increase in the number and intensity of GFAP-expressing Müller cells was observed in the clinical stage, especially at the terminal stage of the disease. Spongiosis and PrPsc were detected within the visual pathways at the preclinical stage, their values increasing during the course of the disease but varying between the areas examined. PrPsc was detected in only 3 optic nerves. The results suggest that the presence of PrPsc in the retina correlates with disease progression during the preclinical and clinical stages, perhaps using the inner plexiform layer as a first entry site and diffusing from the brain following a centrifugal model.
Yamada, Shigehito; Uwabe, Chigako; Nakatsu-Komatsu, Tomoko; Minekura, Yutaka; Iwakura, Masaji; Motoki, Tamaki; Nishimiya, Kazuhiko; Iiyama, Masaaki; Kakusho, Koh; Minoh, Michihiko; Mizuta, Shinobu; Matsuda, Tetsuya; Matsuda, Yoshimasa; Haishi, Tomoyuki; Kose, Katsumi; Fujii, Shingo; Shiota, Kohei
2006-02-01
Morphogenesis in the developing embryo takes place in three dimensions, and in addition, the dimension of time is another important factor in development. Therefore, the presentation of sequential morphological changes occurring in the embryo (4D visualization) is essential for understanding the complex morphogenetic events and the underlying mechanisms. Until recently, 3D visualization of embryonic structures was possible only by reconstruction from serial histological sections, which was tedious and time-consuming. During the past two decades, 3D imaging techniques have made significant advances thanks to the progress in imaging and computer technologies, computer graphics, and other related techniques. Such novel tools have enabled precise visualization of the 3D topology of embryonic structures and the demonstration of spatiotemporal 4D sequences of organogenesis. Here, we describe a project in which staged human embryos were imaged with a magnetic resonance (MR) microscope, and 3D images of embryos and their organs at each developmental stage were reconstructed from the MR data with the aid of computer graphics techniques. On the basis of the 3D models of staged human embryos, we constructed a data set of 3D images of human embryos and made movies to illustrate the sequential process of human morphogenesis. Furthermore, a computer-based self-learning program of human embryology is being developed for educational purposes, using the photographs, histological sections, MR images, and 3D models of staged human embryos. Copyright 2005 Wiley-Liss, Inc.
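The reconstruction pipeline described above, from a stack of MR microscope sections to a 3D surface model, can be sketched in a few lines. The sketch below is purely illustrative and is not the project's actual software: the file names, voxel spacing, and intensity threshold are hypothetical, and scikit-image's marching cubes stands in for whatever surface extraction the authors used.

```python
# Minimal sketch: reconstruct a 3D volume from a stack of 2D MR slices and
# extract an isosurface mesh for visualization. File names, voxel spacing, and
# the threshold are hypothetical placeholders.
import glob
import numpy as np
import imageio.v2 as imageio
from skimage import measure

# Load slices in order and stack them along the z-axis into a 3D volume.
slice_files = sorted(glob.glob("embryo_stage16_slice_*.png"))  # hypothetical paths
volume = np.stack([imageio.imread(f).astype(np.float32) for f in slice_files], axis=0)

# Account for anisotropic voxels (slice thickness usually exceeds in-plane spacing).
voxel_spacing = (0.12, 0.04, 0.04)  # (z, y, x) in mm; assumed values

# Extract a triangle mesh of the embryo surface at an intensity threshold.
verts, faces, normals, _ = measure.marching_cubes(
    volume, level=volume.mean(), spacing=voxel_spacing
)
print(f"Reconstructed surface: {len(verts)} vertices, {len(faces)} triangles")
```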
Virtual Display Design and Evaluation of Clothing: A Design Process Support System
ERIC Educational Resources Information Center
Zhang, Xue-Fang; Huang, Ren-Qun
2014-01-01
This paper proposes a new computer-aided educational system for clothing visual merchandising and display. It aims to provide an operating environment that supports the various stages of display design in a user-friendly and intuitive manner. First, this paper provides a brief introduction to current software applications in the field of…
ERIC Educational Resources Information Center
Bertone, Armando; Hanck, Julie; Kogan, Cary; Chaudhuri, Avi; Cornish, Kim
2010-01-01
We have previously described (see companion paper, this issue) the utility of using perceptual signatures for defining and dissociating condition-specific neural functioning underlying early visual processes in autism and FXS. These perceptually-driven hypotheses are based on differential performance evidenced only at the earliest stages of visual…
Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi
In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters, which can adapt to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can serve as an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
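A minimal sketch of the decision rule described above is given below, under the simplifying assumption that each stage's probability density functions are Gaussian and that the stage with the highest conditional probability wins. The feature set, stage labels, and training data are placeholders; the actual system derives its knowledge database from clinician-scored polysomnography parameters.

```python
# Sketch: estimate one density per parameter and per stage from visually scored
# epochs, then assign new epochs to the stage with the highest conditional
# probability. Gaussian densities and the toy features are illustrative only.
import numpy as np
from scipy.stats import norm

STAGES = ["awake", "REM", "light", "deep"]

def build_knowledge_base(features, labels):
    """features: (n_epochs, n_params); labels: stage name per epoch."""
    kb = {}
    for stage in STAGES:
        x = features[labels == stage]
        kb[stage] = {
            "prior": len(x) / len(features),
            "mean": x.mean(axis=0),
            "std": x.std(axis=0) + 1e-6,
        }
    return kb

def determine_stage(epoch_features, kb):
    """Pick the stage maximizing prior * product of per-parameter densities."""
    scores = {}
    for stage, params in kb.items():
        logpdf = norm.logpdf(epoch_features, params["mean"], params["std"]).sum()
        scores[stage] = np.log(params["prior"]) + logpdf
    return max(scores, key=scores.get)

# Toy usage with random data in place of EEG/EOG/EMG parameters.
rng = np.random.default_rng(0)
train_x = rng.normal(size=(400, 3))
train_y = rng.choice(STAGES, size=400)
kb = build_knowledge_base(train_x, np.asarray(train_y))
print(determine_stage(rng.normal(size=3), kb))
```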
Perinetti, Giuseppe; Caprioglio, Alberto; Contardo, Luca
2014-11-01
To evaluate the diagnostic accuracy and repeatability of the visual assessment of the cervical vertebral maturation (CVM) stages. Ten operators underwent training sessions in visual assessment of CVM staging. Subsequently, they were asked to stage 72 cases equally divided into the six stages. The assessment was repeated twice in two sessions (T1 and T2) 4 weeks apart. A reference standard for each case was created according to a cephalometric analysis of both the concavities and shapes of the cervical vertebrae. The overall agreement with the reference standard was about 68% for both sessions and 76.9% for intrarater repeatability. The overall kappa coefficients with the reference standard were up to 0.86 for both sessions, and 0.88 for intrarater repeatability. Overall, disagreements one stage and two stages apart were 23.5% (T1) and 5.1% (T2), respectively. Sensitivity ranged from 53.3% for CS5 (T1) to 99.9% for CS1 (T2), positive predictive values ranged from 52.4% for CS5 (T2) to 94.3% for CS6 (T1), and accuracy ranged from 83.6% for CS4 (T2) to 94.9% for CS1 (T1). Visual assessment of the CVM stages is accurate and repeatable to a satisfactory level. About one in three cases is misclassified; disagreement is generally limited to one stage and is mostly seen in stages 4 and 5.
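The agreement and kappa statistics quoted above can be reproduced for any operator-versus-reference staging table with a few lines of code. The sketch below uses made-up ratings and scikit-learn's kappa implementation; it simply illustrates how exact agreement, (weighted) kappa, and the size of disagreements are computed, not the study's actual analysis.

```python
# Illustrative sketch: percentage agreement and (weighted) kappa between one
# observer's CVM stages and a reference standard. The ratings are made up.
import numpy as np
from sklearn.metrics import cohen_kappa_score

reference = np.array([1, 2, 3, 4, 5, 6, 3, 2, 5, 4])   # reference standard stages
observer  = np.array([1, 2, 3, 5, 5, 6, 3, 2, 4, 4])   # one operator's visual staging

agreement = np.mean(reference == observer)
kappa = cohen_kappa_score(reference, observer)
weighted_kappa = cohen_kappa_score(reference, observer, weights="linear")

print(f"exact agreement: {agreement:.1%}")
print(f"Cohen's kappa: {kappa:.2f}  (linear-weighted: {weighted_kappa:.2f})")

# How far apart the disagreements are (one stage vs. two or more stages apart).
diff = np.abs(reference - observer)
print(f"one stage apart: {np.mean(diff == 1):.1%}, two+ apart: {np.mean(diff >= 2):.1%}")
```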
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼ 120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
Zeitoun, Jack H.; Kim, Hyungtae
2017-01-01
Binocular mechanisms for visual processing are thought to enhance spatial acuity by combining matched input from the two eyes. Studies in the primary visual cortex of carnivores and primates have confirmed that eye-specific neuronal response properties are largely matched. In recent years, the mouse has emerged as a prominent model for binocular visual processing, yet little is known about the spatial frequency tuning of binocular responses in mouse visual cortex. Using calcium imaging in awake mice of both sexes, we show that the spatial frequency preference of cortical responses to the contralateral eye is ∼35% higher than responses to the ipsilateral eye. Furthermore, we find that neurons in binocular visual cortex that respond only to the contralateral eye are tuned to higher spatial frequencies. Binocular neurons that are well matched in spatial frequency preference are also matched in orientation preference. In contrast, we observe that binocularly mismatched cells are more mismatched in orientation tuning. Furthermore, we find that contralateral responses are more direction-selective than ipsilateral responses and are strongly biased to the cardinal directions. The contralateral bias of high spatial frequency tuning was found in both awake and anesthetized recordings. The distinct properties of contralateral cortical responses may reflect the functional segregation of direction-selective, high spatial frequency-preferring neurons in earlier stages of the central visual pathway. Moreover, these results suggest that the development of binocularity and visual acuity may engage distinct circuits in the mouse visual system. SIGNIFICANCE STATEMENT Seeing through two eyes is thought to improve visual acuity by enhancing sensitivity to fine edges. Using calcium imaging of cellular responses in awake mice, we find surprising asymmetries in the spatial processing of eye-specific visual input in binocular primary visual cortex. The contralateral visual pathway is tuned to higher spatial frequencies than the ipsilateral pathway. At the highest spatial frequencies, the contralateral pathway strongly prefers to respond to visual stimuli along the cardinal (horizontal and vertical) axes. These results suggest that monocular, and not binocular, mechanisms set the limit of spatial acuity in mice. Furthermore, they suggest that the development of visual acuity and binocularity in mice involves different circuits. PMID:28924011
Multisensory brand search: How the meaning of sounds guides consumers' visual attention.
Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles
2016-06-01
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Iwasaki, Miho; Noguchi, Yasuki; Kakigi, Ryusuke
2018-06-07
Some researchers in aesthetics assume that visual features related to aesthetic perception (e.g. the golden ratio and symmetry) are commonly embedded in masterpieces. If this is true, an intriguing hypothesis is that the human brain has neural circuitry specialized for the processing of visual beauty. We tested this hypothesis by combining a neuroimaging technique with the repetition suppression (RS) paradigm. Subjects (non-experts in art) viewed two images of sculptures presented sequentially. Some sculptures obeyed the golden ratio (canonical images), while the golden proportion was distorted in other sculptures (deformed images). We found that the occipito-temporal cortex in the right hemisphere showed the RS when a canonical sculpture (e.g. Venus de Milo) was repeatedly presented, but not when its deformed version was repeated. Furthermore, the right parietal cortex showed the RS to the canonical proportion even when the two sculptures had different identities (e.g. Venus de Milo as the first stimulus and David di Michelangelo as the second), indicating that this region encodes the golden ratio as an abstract rule shared by different sculptures. These results suggest two separate stages of neural processing for aesthetic information (one in the occipito-temporal and another in the parietal regions) that are hierarchically arranged in the human brain.
Carreon-Martinez, Lucia B.; Walter, Ryan P.; Johnson, Timothy B.; Ludsin, Stuart A.; Heath, Daniel D.
2015-01-01
Nutrient-rich, turbid river plumes that are common to large lakes and coastal marine ecosystems have been hypothesized to benefit survival of fish during early life stages by increasing food availability and (or) reducing vulnerability to visual predators. However, evidence that river plumes truly benefit the recruitment process remains meager for both freshwater and marine fishes. Here, we use genotype assignment between juvenile and larval yellow perch (Perca flavescens) from western Lake Erie to estimate and compare recruitment to the age-0 juvenile stage for larvae residing inside the highly turbid, south-shore Maumee River plume versus those occupying the less turbid, more northerly Detroit River plume. Bayesian genotype assignment of a mixed assemblage of juvenile (age-0) yellow perch to putative larval source populations established that recruitment of larvae was higher from the turbid Maumee River plume than from the less turbid Detroit River plume during 2006 and 2007, but not in 2008. Our findings add to the growing evidence that turbid river plumes can indeed enhance survival of fish larvae to recruited life stages, and also demonstrate how novel population genetic analyses of early life stages can help identify the critical processes underlying fish recruitment. PMID:25954968
Acuity-independent effects of visual deprivation on human visual cortex
Hou, Chuan; Pettet, Mark W.; Norcia, Anthony M.
2014-01-01
Visual development depends on sensory input during an early developmental critical period. Deviation of the pointing direction of the two eyes (strabismus) or chronic optical blur (anisometropia) separately and together can disrupt the formation of normal binocular interactions and the development of spatial processing, leading to a loss of stereopsis and visual acuity known as amblyopia. To shed new light on how these two different forms of visual deprivation affect the development of visual cortex, we used event-related potentials (ERPs) to study the temporal evolution of visual responses in patients who had experienced either strabismus or anisometropia early in life. To make a specific statement about the locus of deprivation effects, we took advantage of a stimulation paradigm in which we could measure deprivation effects that arise either before or after a configuration-specific response to illusory contours (ICs). Extraction of ICs is known to first occur in extrastriate visual areas. Our ERP measurements indicate that deprivation via strabismus affects both the early part of the evoked response that occurs before ICs are formed as well as the later IC-selective response. Importantly, these effects are found in the normal-acuity nonamblyopic eyes of strabismic amblyopes and in both eyes of strabismic patients without amblyopia. The nonamblyopic eyes of anisometropic amblyopes, by contrast, are normal. Our results indicate that beyond the well-known effects of strabismus on the development of normal binocularity, it also affects the early stages of monocular feature processing in an acuity-independent fashion. PMID:25024230
Visual event-related potential changes in two subtypes of multiple system atrophy, MSA-C and MSA-P.
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki; Wang, Lihong; Li, Mei; Suzuki, Yume; Takahashi, Tatsuya; Ikegami, Tadashi; Matsubara, Sho
2002-08-01
We investigated visual event-related potentials (ERPs) in two subtypes of multiple system atrophy (MSA): 15 MSA-C patients, 12 MSA-P patients, and 21 normal control (NC) subjects. We used a visual oddball task to elicit ERPs. No significant changes were seen in N1 or N2 latency, in either MSA-C or MSA-P, compared with the NC group. An early stage of visual information processing related to N1 and a visual discrimination process related to N2 might therefore be preserved in both MSA-C and MSA-P. The P3a peak was more frequently undetectable in MSA than in the NC group. Significant P3a amplitude reduction in both MSA-C and MSA-P suggests impairment of automatic cognitive processing in both subtypes. A significant difference was found in P3b latency and P3b amplitude only in MSA-C, compared with the NC group. This result suggests impairment of the controlled cognitive processing that follows the visual discrimination process in the MSA-C group. We further investigated the correlation between visual ERP changes and magnetic resonance imaging (MRI) data. Quantitative MRI measurements showed reduced size of the pons, cerebellum, perisylvian cerebral area, and deep cerebral gray matter in both MSA-C and MSA-P, and of the corpus callosum only in MSA-P, as compared to the NC group. In both MSA-C and MSA-P, P3b latency was significantly correlated with the size on MRI of the pons and the cerebellum. P3b latency in the whole MSA group was also significantly correlated with the size of the pons and the cerebellum. These results indicate that P3b latency changes in parallel with the volume of the pons and the cerebellum in both MSA-C and MSA-P.
Glicerina, Virginia; Balestra, Federica; Dalla Rosa, Marco; Bergenhstål, Bjorn; Tornberg, Eva; Romani, Santina
2014-07-01
The effect of different process stages on the microstructural and visual properties of dark chocolate was studied. Samples were obtained at each phase of the manufacturing process: mixing, prerefining, refining, conching, and tempering. A laser light diffraction technique and environmental scanning electron microscopy (ESEM) were used to study the particle size distribution (PSD) and to analyze modifications in the network structure. Moreover, colorimetric analyses (L*, h°, and C*) were performed on all samples. Each stage strongly influenced the microstructural characteristics of the products, above all the PSD. The Sauter diameter (D[3,2]) decreased from 5.44 μm for the mixed chocolate sample to 3.83 μm for the refined one. ESEM analysis also revealed wide variations in the network structure of samples during the process, with an increase in aggregation and in the contact points between particles from the mixing to the refining stage. Samples obtained from the conching and tempering stages were characterized by small particle sizes and a less dense aggregate structure. The color results showed that samples with the finest particles, having a larger specific surface area and the smallest diameter, appeared lighter and more saturated than those with coarse particles. The final quality of food dispersions is affected by network and particle characteristics. Detailed knowledge of the influence of each single processing stage on chocolate microstructural properties is useful in order to improve or modify final product characteristics. ESEM and laser diffraction are suitable techniques to study changes in chocolate microstructure. © 2014 Institute of Food Technologists®
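For readers unfamiliar with the Sauter mean diameter D[3,2] reported above, the sketch below shows how it can be computed from a volume-based particle size distribution such as the one a laser diffraction instrument returns. The size bins and volume fractions are illustrative, not the study's data.

```python
# Sketch: Sauter mean diameter D[3,2] from a volume-based particle size distribution.
import numpy as np

diameters = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # bin midpoints, micrometres
volume_fraction = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])
volume_fraction = volume_fraction / volume_fraction.sum()  # normalize to 1

# Volume-weighted form: D[3,2] = 1 / sum(f_i / d_i), equivalent to
# sum(n_i d_i^3) / sum(n_i d_i^2) for a number-based distribution.
sauter_diameter = 1.0 / np.sum(volume_fraction / diameters)
print(f"D[3,2] = {sauter_diameter:.2f} um")
```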
The role of neuroimaging in the discovery of processing stages. A review.
Mulder, G; Wijers, A A; Lange, J J; Buijink, B M; Mulder, L J; Willemsen, A T; Paans, A M
1995-11-01
In this contribution we show how neuroimaging methods can augment behavioural methods to discover processing stages. Event Related Brain Potentials (ERPs), Brain Electrical Source Analysis (BESA) and regional changes in cerebral blood flow (rCBF) do not necessarily require behavioural responses. With the aid of rCBF we are able to discover several cortical and subcortical brain systems (processors) active in selective attention and memory search tasks. BESA describes cortical activity with high temporal resolution in terms of a limited number of neural generators within these brain systems. The combination of behavioural methods and neuroimaging provides a picture of the functional architecture of the brain. The review is organized around three processors: the Visual, Cognitive and Manual Motor Processors.
Visual Biofeedback using trans-perineal ultrasound during the second stage of labor.
Gilboa, Yinon; Frenkel, Tahl I; Schlesinger, Yael; Rousseau, Sofie; Hamiel, Daniel; Achiron, Reuven; Perlman, Sharon
2017-11-20
To assess the obstetrical and psychological effects of visual biofeedback using trans-perineal ultrasound (TPU) during the second stage of labor. Visual biofeedback using TPU was performed prospectively during the second stage of labor in twenty-six low-risk nulliparous women. Pushing efficacy was assessed by the angle of progression at rest and during pushing efforts before and after observing the ultrasound screen. Obstetrical outcomes included level of perineal tearing, mode of delivery and length of the second stage. Psychological outcomes were assessed via self-report measures during the postnatal hospital stay. These included measures of perceived control and maternal satisfaction with childbirth as well as level of maternal feelings of connectedness toward the newborn. Obstetrical and psychological results were compared to a control group (n=69) who received standard obstetrical coaching by midwives. Pushing efficacy significantly increased following visual biofeedback by TPU (p = 0.01). A significant association was found between the visual biofeedback and an intact perineum following delivery (p = 0.03). No significant differences were found with regard to mode of delivery or the length of the second stage. Feelings of maternal connectedness towards the newborn were significantly higher in the visual biofeedback group relative to non-biofeedback controls (p = 0.003). The results of this pilot study indicate that TPU may serve as a complementary tool to coached maternal pushing during the second stage of labor, with obstetrical as well as psychological benefits. Further studies are required to confirm our findings and define the exact timing for optimal results. This article is protected by copyright. All rights reserved.
Attractive Flicker--Guiding Attention in Dynamic Narrative Visualizations.
Waldner, Manuela; Le Muzic, Mathieu; Bernhard, Matthias; Purgathofer, Werner; Viola, Ivan
2014-12-01
Focus+context techniques provide visual guidance in visualizations by giving strong visual prominence to elements of interest while the context is suppressed. However, finding a visual feature to enhance for the focus to pop out from its context in a large dynamic scene, while leading to minimal visual deformation and subjective disturbance, is challenging. This paper proposes Attractive Flicker, a novel technique for visual guidance in dynamic narrative visualizations. We first show that flicker is a strong visual attractor in the entire visual field, without distorting, suppressing, or adding any scene elements. The novel aspect of our Attractive Flicker technique is that it consists of two signal stages: The first "orientation stage" is a short but intensive flicker stimulus to attract the attention to elements of interest. Subsequently, the intensive flicker is reduced to a minimally disturbing luminance oscillation ("engagement stage") as visual support to keep track of the focus elements. To find a good trade-off between attraction effectiveness and subjective annoyance caused by flicker, we conducted two perceptual studies to find suitable signal parameters. We showcase Attractive Flicker with the parameters obtained from the perceptual statistics in a study of molecular interactions. With Attractive Flicker, users were able to easily follow the narrative of the visualization on a large display, while the flickering of focus elements was not disturbing when observing the context.
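As a rough illustration of the two-stage signal described above, the sketch below generates a per-frame luminance offset: a brief, strong "orientation" flicker followed by a subtle "engagement" oscillation. All frequencies, amplitudes, and durations are placeholders; the published parameters were derived from the authors' perceptual studies.

```python
# Sketch of a two-stage flicker signal: intense orientation flicker, then a
# gentle engagement oscillation. All parameters are hypothetical placeholders.
import numpy as np

def attractive_flicker(duration_s=5.0, fps=60,
                       orient_dur=0.8, orient_amp=0.5, orient_hz=8.0,
                       engage_amp=0.08, engage_hz=2.0):
    """Return per-frame luminance offsets to add to a focus element."""
    t = np.arange(0, duration_s, 1.0 / fps)
    signal = np.where(
        t < orient_dur,
        orient_amp * np.sign(np.sin(2 * np.pi * orient_hz * t)),   # strong square-wave flicker
        engage_amp * np.sin(2 * np.pi * engage_hz * t),            # gentle luminance oscillation
    )
    return t, signal

t, lum = attractive_flicker()
print(f"{lum.size} frames, peak offset {np.abs(lum).max():.2f}, "
      f"late-stage peak {np.abs(lum[t >= 1.0]).max():.2f}")
```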
Cognitive processing of orientation discrimination in anisometropic amblyopia.
Wang, Jianglan; Zhao, Jiao; Wang, Shoujing; Gong, Rui; Zheng, Zhong; Liu, Longqian
2017-01-01
Cognition is very important in our daily life, yet visual cognition is abnormal in amblyopia. Physiological changes in the brain during cognitive processing can be reflected in event-related potentials (ERPs). The purpose of this study was therefore to investigate the speed and the capacity of resource allocation in visual cognitive processing during an orientation discrimination task, under monocular and binocular viewing conditions, in amblyopia and normal controls as well as in the corresponding eyes of the two groups, using ERPs. We also sought to investigate whether the speed and the capacity of resource allocation in visual cognitive processing vary with target stimuli at different spatial frequencies (3, 6 and 9 cpd) in amblyopia and normal controls as well as between the corresponding eyes of the two groups. Fifteen mild to moderate anisometropic amblyopes and ten normal controls were recruited. Three-stimulus oddball paradigms with orientation discrimination tasks at three different spatial frequencies were used in monocular and binocular conditions in amblyopes and normal controls to elicit ERPs. Accuracy (ACC), reaction time (RT), the latency of novelty P300 and P3b, and the amplitude of novelty P300 and P3b were measured. Results showed that RT was longer in the amblyopic eye than in both eyes of amblyopes and the non-dominant eye of controls. Novelty P300 amplitude was largest in the amblyopic eye, followed by the fellow eye, and smallest in both eyes of amblyopes. Novelty P300 amplitude was larger in the amblyopic eye than in the non-dominant eye and larger in the fellow eye than in the dominant eye. P3b latency was longer in the amblyopic eye than in the fellow eye, both eyes of amblyopes and the non-dominant eye of controls. P3b latency was not associated with RT in amblyopia. Neural responses of the amblyopic eye are abnormal at the middle and late stages of cognitive processing, indicating that the amblyopic eye needs to spend more time or integrate more resources to process the same visual task. The fellow eye and both eyes in amblyopia are slightly different from the dominant eye and both eyes in normal controls at the middle and late stages of cognitive processing. Meanwhile, the extent of abnormality in the amblyopic eye does not vary across the three spatial frequencies used in our study.
Flow structure of natural dehumidification over a horizontal finned-tube
NASA Astrophysics Data System (ADS)
Hirbodi, Kamran; Yaghoubi, Mahmood
2016-08-01
In the present study, the structure of water drop formation, growth, coalescence and departure over a horizontal finned-tube during natural dehumidification is investigated experimentally. The starting time of drop repulsion, as well as the heat transfer rate and the rate of dripping condensate under quasi-steady-state conditions, are presented. Furthermore, the cold airflow pattern around the horizontal finned-tube during the natural dehumidification process is visualized using a smoke generation scheme. The finned-tube has a length of 300 mm, and the inner and outer fin diameters, fin thickness and fin spacing are 25.4, 56, 0.4 and 2 mm, respectively. The tests are conducted in an insulated control room with dimensions of 5.8 m × 3 m × 4 m. Ambient air temperature, relative humidity and fin base temperature are selected from 25 to 35 °C, from 40 to 70 % and from 4 to 8 °C, respectively. Observations show that natural condensation from humid air over the test case is completely dropwise. Droplets only form on the edge of the fin, and the lateral fin surfaces remain almost dry. The dehumidification process over the tested finned-tube is divided into four stages: nucleation, formation, growth and departure of drops. It is also observed that the condensate inundation leaves the tube bottom in the form of droplets. Smoke visualization shows that humid air flows downward around the cold finned-tube surface without noticeable turbulence or separation in the initial stages of the dehumidification process, but the airflow exhibits some disturbances in the intermediate stage, especially during drop departure at the edge of the fins.
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newer H.264/AVC encoder because of the well-known chicken-and-egg dilemma arising from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RDO process with a constant quantization parameter (QP). Based on the prediction residual signals from stage 1 and the target distortion chosen for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding pass is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
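A hedged sketch of the two-stage idea is shown below: stage 1 measures the prediction-residual variance at a fixed QP, and stage 2 converts a constant target distortion into a frame-level bit budget through a closed-form rate-distortion model. The Gaussian model R(D) = 0.5 log2(sigma^2 / D) is used here only as a stand-in for the paper's own approximation, which is not reproduced.

```python
# Sketch of frame-level constant-distortion bit allocation with an illustrative
# closed-form R-D model (not the paper's exact formula).
import numpy as np

def frame_bit_target(residual, target_mse, bits_floor=0.0):
    """Allocate bits/pixel for one frame so that its MSE approaches target_mse."""
    sigma2 = np.var(residual.astype(np.float64))
    if sigma2 <= target_mse:          # residual already below the target distortion
        return bits_floor
    return 0.5 * np.log2(sigma2 / target_mse)   # bits per pixel (illustrative model)

# Toy usage: residuals of three frames with different activity levels.
rng = np.random.default_rng(1)
frames = [rng.normal(0, s, size=(64, 64)) for s in (4.0, 12.0, 25.0)]
target_mse = 30.0   # constant distortion target for smooth quality
for i, res in enumerate(frames):
    bpp = frame_bit_target(res, target_mse)
    print(f"frame {i}: residual var {np.var(res):7.1f} -> {bpp:.2f} bits/pixel")
```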
Waddell, George; Williamon, Aaron
2017-01-01
Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time. PMID:28487662
Ebrahimi, F; Mikaili, M; Estrada, E; Nazeran, H
2007-01-01
Staging and detection of the various states of sleep derived from EEG and other biomedical signals have proven to be very helpful in the diagnosis, prognosis and treatment of various sleep-related disorders. The time-consuming and costly process of visual scoring of sleep stages by a specialist has always motivated researchers to develop an automatic sleep scoring system, and the first step toward achieving this task is finding discriminating characteristics (or features) for each stage. A vast variety of such features and methods have been investigated in the sleep literature with different degrees of success. In this study, we investigated the performance of a newly introduced measure, the Itakura Distance (ID), as a similarity measure between EEG and EOG signals. This work demonstrated and further confirmed the outcome of our previous research: the Itakura Distance serves as a valuable similarity measure for differentiating between sleep stages.
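One common formulation of the Itakura distance, based on the LPC models of the two signals, is sketched below; whether it matches the exact variant used in this study is an assumption. The model order and the surrogate EEG/EOG epochs are illustrative.

```python
# Sketch: Itakura distance between two signals from their LPC models.
# d = log( (b' R_x b) / (a' R_x a) ), where a is the LPC vector of x,
# b the LPC vector of y, and R_x the autocorrelation matrix of x.
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

def lpc(x, order):
    """LPC coefficients [1, a1..ap] via the autocorrelation method."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), -r[1:])   # Yule-Walker equations
    return np.concatenate(([1.0], a)), r

def itakura_distance(x, y, order=10):
    a, r_x = lpc(x, order)
    b, _ = lpc(y, order)
    R_x = toeplitz(r_x)
    return float(np.log((b @ R_x @ b) / (a @ R_x @ a)))

# Toy usage with surrogate 30-s "EEG" and "EOG" epochs at 100 Hz.
rng = np.random.default_rng(2)
eeg = rng.normal(size=3000)
eog = 0.6 * eeg + 0.8 * rng.normal(size=3000)   # partially correlated channel
print(f"Itakura distance (EEG vs. EOG): {itakura_distance(eeg, eog):.3f}")
```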
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
2015-01-01
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.
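The supervised part of this approach can be illustrated with a short sketch: label each fMRI time point with its task stage and ask whether a linear classifier can discriminate the stages from multivoxel patterns in a region of interest. The data below are synthetic stand-ins, and the time series bootstraps and Hidden Markov Model used in the study are omitted.

```python
# Sketch: decode experimenter-defined task stages from multivoxel BOLD patterns
# with a cross-validated linear classifier. Data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(3)
n_timepoints, n_voxels = 240, 150
stages = np.repeat(["encoding", "choice", "reward", "delay"], n_timepoints // 4)

# Synthetic BOLD patterns: each stage gets a weak mean shift over noisy voxels.
stage_means = {s: rng.normal(0, 0.3, n_voxels) for s in np.unique(stages)}
bold = np.array([stage_means[s] + rng.normal(0, 1.0, n_voxels) for s in stages])

clf = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, bold, stages, cv=cv)
print(f"stage decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```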
EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets.
Robbins, Kay A
2012-01-01
Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis.
Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.
Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I
2016-06-01
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification was valuable and effective for both scientific and educational purposes.
Bellussi, F; Alcamisi, L; Guizzardi, G; Parma, D; Pilu, G
2018-03-13
To investigate the usefulness of visual biofeedback using transperineal ultrasound to improve coached pushing during the active second stage of labor in nulliparous women. This was a randomized controlled trial of low-risk nulliparous women in the active second stage of labor. Patients were allocated to either coached pushing aided by visual demonstration on transperineal ultrasound of the progress of the fetal head (sonographic coaching) or traditional coaching. Patients in both groups were coached by an obstetrician for the first 20 min of the active second stage of labor and, subsequently, the labor was supervised by a midwife. Primary outcomes were the duration of the active second stage and the increase in the angle of progression at the end of the coaching process. Secondary outcomes included the incidence of operative delivery and complications of labor. Forty women were recruited into the study. Those who received sonographic coaching had a shorter active phase of the second stage (30 min (interquartile range (IQR), 24-42 min) vs 45 min (IQR, 39-55 min); P = 0.01) and a greater increase in the angle of progression (13.5° (IQR, 9-20°) vs 5° (IQR, 3-9.5°); P = 0.01) in the first 20 min of the active second stage of labor than did those who had traditional coaching. No differences were found in the secondary outcomes between the two groups. Our preliminary data suggest that transperineal ultrasound may be a useful adjunct to coached pushing during the active second stage of labor. Further studies are required to confirm these findings and better define the benefits of this approach. Copyright © 2018 ISUOG. Published by John Wiley & Sons Ltd.
Temporal Processing in the Olfactory System: Can We See a Smell?
Gire, David H.; Restrepo, Diego; Sejnowski, Terrence J.; Greer, Charles; De Carlos, Juan A.; Lopez-Mascaraque, Laura
2013-01-01
Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramon y Cajal over a century ago. Consideration of differences in neural activity across sensory systems contributes to generating new approaches to understanding signal processing. PMID:23664611
Attractive Serial Dependence in the Absence of an Explicit Task.
Fornaciai, Michele; Park, Joonkoo
2018-03-01
Attractive serial dependence refers to an adaptive change in the representation of sensory information, whereby a current stimulus appears to be similar to a previous one. The nature of this phenomenon is controversial, however, as serial dependence could arise from biased perceptual representations or from biased traces of working memory representation at a decisional stage. Here, we demonstrated a neural signature of serial dependence in numerosity perception emerging early in the visual processing stream even in the absence of an explicit task. Furthermore, a psychophysical experiment revealed that numerosity perception is biased by a previously presented stimulus in an attractive way, not by repulsive adaptation. These results suggest that serial dependence is a perceptual phenomenon starting from early levels of visual processing and occurring independently from a decision process, which is consistent with the view that these biases smooth out noise from neural signals to establish perceptual continuity.
Xu, Hong-Ping; Burbridge, Timothy J; Ye, Meijun; Chen, Minggang; Ge, Xinxin; Zhou, Z Jimmy; Crair, Michael C
2016-03-30
Retinal waves are correlated bursts of spontaneous activity whose spatiotemporal patterns are critical for early activity-dependent circuit elaboration and refinement in the mammalian visual system. Three separate developmental wave epochs or stages have been described, but the mechanism(s) of pattern generation of each and their distinct roles in visual circuit development remain incompletely understood. We used neuroanatomical, in vitro and in vivo electrophysiological, and optical imaging techniques in genetically manipulated mice to examine the mechanisms of wave initiation and propagation and the role of wave patterns in visual circuit development. Through deletion of β2 subunits of nicotinic acetylcholine receptors (β2-nAChRs) selectively from starburst amacrine cells (SACs), we show that mutual excitation among SACs is critical for Stage II (cholinergic) retinal wave propagation, supporting models of wave initiation and pattern generation from within a single retinal cell type. We also demonstrate that β2-nAChRs in SACs, and normal wave patterns, are necessary for eye-specific segregation. Finally, we show that Stage III (glutamatergic) retinal waves are not themselves necessary for normal eye-specific segregation, but elimination of both Stage II and Stage III retinal waves dramatically disrupts eye-specific segregation. This suggests that persistent Stage II retinal waves can adequately compensate for Stage III retinal wave loss during the development and refinement of eye-specific segregation. These experiments confirm key features of the "recurrent network" model for retinal wave propagation and clarify the roles of Stage II and Stage III retinal wave patterns in visual circuit development. Spontaneous activity drives early mammalian circuit development, but the initiation and patterning of activity vary across development and among modalities. Cholinergic "retinal waves" are initiated in starburst amacrine cells and propagate to retinal ganglion cells and higher-order visual areas, but the mechanism responsible for creating their unique and critical activity pattern is incompletely understood. We demonstrate that cholinergic wave patterns are dictated by recurrent connectivity within starburst amacrine cells, and retinal ganglion cells act as "readouts" of patterned activity. We also show that eye-specific segregation occurs normally without glutamatergic waves, but elimination of both cholinergic and glutamatergic waves completely disrupts visual circuit development. These results suggest that each retinal wave pattern during development is optimized for concurrently refining multiple visual circuits. Copyright © 2016 the authors.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Salience of the lambs: a test of the saliency map hypothesis with pictures of emotive objects.
Humphrey, Katherine; Underwood, Geoffrey; Lambert, Tony
2012-01-25
Humans have an ability to rapidly detect emotive stimuli. However, many emotional objects in a scene are also highly visually salient, which raises the question of how dependent the effects of emotionality are on visual saliency and whether the presence of an emotional object changes the power of a more visually salient object in attracting attention. Participants were shown a set of positive, negative, and neutral pictures and completed recall and recognition memory tests. Eye movement data revealed that visual saliency does influence eye movements, but the effect is reliably reduced when an emotional object is present. Pictures containing negative objects were recognized more accurately and recalled in greater detail, and participants fixated more on negative objects than positive or neutral ones. Initial fixations were more likely to be on emotional objects than more visually salient neutral ones, suggesting that the processing of emotional features occurs at a very early stage of perception.
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M.; Potenza, Marc N.
2011-01-01
Background: The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. Methodology/Principal Findings: We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Conclusions: Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load. PMID:21267080
Facial decoding in schizophrenia is underpinned by basic visual processing impairments.
Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric
2017-09-01
Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific for emotions or due to a more general impairment for any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e. eyes orientation and emotions) and stable (i.e. gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by a perceptual deficit in basic visual processing. It seems that the deficit in the decoding of emotional facial expression (EFE) is not a specific deficit of emotion processing, but is at least partly related to a generalized perceptual deficit in lower-level perceptual processing, occurring before the stage of emotion processing, and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Woollams, Anna M.; Silani, Giorgia; Okada, Kayoko; Patterson, Karalyn; Price, Cathy J.
2011-01-01
Prior lesion and functional imaging studies have highlighted the importance of the left ventral occipito-temporal (LvOT) cortex for visual word recognition. Within this area, there is a posterior-anterior hierarchy of subregions that are specialized for different stages of orthographic processing. The aim of the present fMRI study was to…
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
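The two-stage decomposition described above lends itself to a compact illustration. The sketch below is not the authors' SMV code; the class names, features and deviation-threshold rule are hypothetical placeholders showing how an off-line learning stage (LIF) could hand a learned reference model to an on-line inspection stage (OLI).

```python
# Minimal sketch of a two-stage inspection pipeline in the spirit of SMV.
# All names and the deviation-threshold rule are illustrative assumptions.
import numpy as np

class LearnInspectionFeatures:
    """Off-line stage (LIF): learn reference feature statistics from known-good samples."""
    def fit(self, feature_vectors):
        X = np.asarray(feature_vectors, dtype=float)
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-9
        return self

class OnLineInspection:
    """On-line stage (OLI): flag products whose features deviate from the learned model."""
    def __init__(self, learned, threshold=3.0):
        self.learned = learned
        self.threshold = threshold  # allowed deviation in standard units

    def inspect(self, feature_vector):
        z = np.abs((np.asarray(feature_vector, dtype=float) - self.learned.mean) / self.learned.std)
        return "pass" if np.all(z < self.threshold) else "fail"

# Hypothetical usage: features might be, e.g., solder-pad areas measured from PCB images.
lif = LearnInspectionFeatures().fit([[10.1, 4.9], [9.8, 5.2], [10.0, 5.0]])
oli = OnLineInspection(lif)
print(oli.inspect([10.2, 5.1]))  # -> "pass"
print(oli.inspect([14.0, 2.0]))  # -> "fail"
```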
Reliability of visual diagnosis of endometriosis.
Fernando, Shavi; Soh, Pei Qian; Cooper, Michael; Evans, Susan; Reid, Geoffrey; Tsaltas, Jim; Rombauts, Luk
2013-01-01
To determine whether accuracy of visual diagnosis of endometriosis at laparoscopy is determined by stage of disease. Prospective longitudinal cohort study (Canadian Task Force classification II-2). Tertiary referral centers in three Australian states. Of 1439 biopsy specimens, endometriosis was proven in at least one specimen in 431 patients. Laparoscopy with visual diagnosis and staging of endometriosis followed by histopathologic analysis and confirmation. Operations were performed by five experienced laparoscopic gynecologists. Histopathologic confirmation of visual diagnosis of endometriosis adjusted for significant covariates. Endometriosis was accurately diagnosed in 49.7% of American Society for Reproductive Medicine (ASRM) stage I cases, which was significantly less accurate than for other stages of endometriosis. Deep endometriosis was more likely to be diagnosed accurately than superficial endometriosis (adjusted odds ratio, 2.51; 95% confidence interval, 1.50-4.18; p < .01). Lesion volume was also predictive, with larger lesions diagnosed more accurately than smaller lesions. In general, lesion site did not greatly influence accuracy except for superficial ovarian lesions, which were more likely to be incorrectly diagnosed visually as endometriosis (adjusted odds ratio, 0.16; 95% confidence interval, 0.06-0.41; p < .01). There was no statistically significant difference in accuracy between the gynecologic surgeons. The accuracy of visual diagnosis of endometriosis was substantially influenced by ASRM stage, the depth and volume of the lesion, and to a lesser extent the location of the lesion. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Ortiz-Ruiz, Alejandra; Postigo, María; Gil-Casanova, Sara; Cuadrado, Daniel; Bautista, José M; Rubio, José Miguel; Luengo-Oroz, Miguel; Linares, María
2018-01-30
Routine field diagnosis of malaria is a considerable challenge in rural, low-resource endemic areas, mainly due to a lack of personnel, training and sample processing capacity. In addition, differential diagnosis of Plasmodium species has a high level of misdiagnosis. Real-time remote microscopy diagnosis through on-line crowdsourcing platforms could become an agile network to support diagnosis-based treatment and malaria control in low-resource areas. This study explores whether accurate Plasmodium species identification, a critical step in the diagnosis protocol for choosing the appropriate medication, is possible through the information provided by non-trained on-line volunteers. Eighty-eight volunteers completed a series of questionnaires on 110 images to differentiate species (Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, Plasmodium malariae, Plasmodium knowlesi) and parasite staging from thin blood smear images digitalized with a smartphone camera adapted to the ocular of a conventional light microscope. Visual cues evaluated in the surveys include texture and colour, parasite shape and red blood cell size. On-line volunteers are able to discriminate Plasmodium species (P. falciparum, P. malariae, P. vivax, P. ovale, P. knowlesi) and stages in thin-blood smears according to visual cues observed on digitalized images of parasitized red blood cells. Friendly textual descriptions of the visual cues and of specialized malaria terminology are key for volunteers' learning and efficiency. On-line volunteers with short training are able to differentiate malaria parasite species and parasite stages from digitalized thin smears based on simple visual cues (shape, size, texture and colour). While the accuracy of a single on-line volunteer is far from perfect, a single parasite classification obtained by combining the opinions of multiple on-line volunteers over the same smear could improve the accuracy and reliability of Plasmodium species identification in remote malaria diagnosis.
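As a concrete illustration of combining multiple volunteers' answers into one classification, a simple majority vote could be used; the sketch below uses hypothetical labels and is not necessarily the platform's actual aggregation rule.

```python
# Illustrative aggregation of volunteer labels for one smear image by majority
# vote; the vote list is hypothetical and the study's combination rule may differ.
from collections import Counter

def aggregate_votes(labels):
    """Return the majority label and the fraction of volunteers supporting it."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(labels)

votes = ["P. falciparum", "P. falciparum", "P. vivax", "P. falciparum", "P. ovale"]
species, agreement = aggregate_votes(votes)
print(species, round(agreement, 2))  # -> P. falciparum 0.6
```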
NASA Astrophysics Data System (ADS)
Santosa, H.; Ernawati, J.; Wulandari, L. D.
2018-03-01
The visual aesthetic experience in urban spaces is important in establishing a comfortable and satisfying experience for the community. The embodiment of a good visual image of urban space will encourage the emergence of positive perceptions and meanings, stimulating the community to react well to its urban space. Moreover, to establish good governance in urban planning and design, it is necessary to promote community participation in the process of controlling the visual quality of urban space through visual quality evaluation of urban street corridors. This study is an early stage in the development of the 'Landscape Visual Planning System' for the commercial street corridors of Malang. Accordingly, the research aims to evaluate the physical characteristics and the public preferences regarding the spatial and visual aspects of five provincial road corridors in Malang. This study employs field survey methods and an environmental aesthetics approach through the semantic differential method. The results of the identification of physical characteristics and the assessment of public preferences on the spatial and visual aspects of the five provincial streets serve as the basis for constructing the 3D interactive simulation scenarios in the Landscape Visual Planning System.
Encapsulated social perception of emotional expressions.
Smortchkova, Joulia
2017-01-01
In this paper I argue that the detection of emotional expressions is, in its early stages, informationally encapsulated. I clarify and defend this view by appeal to data from social perception on the visual processing of faces, bodies, and facial and bodily expressions. Encapsulated social perception might exist alongside processes that are cognitively penetrated and that have to do with recognition and categorization, and it plays a central evolutionary role in preparing early and rapid responses to emotional stimuli. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade
2013-05-01
As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input stream they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that does not require extended listening, since visual "snapshots" are useful but auditory sounds exist only over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimal use of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses the information infrastructure and data representation concerns involved in their use in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.
Chang, Yu-Cherng C; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N; Hämäläinen, Matti S; Temereanca, Simona
2018-01-01
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
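For readers unfamiliar with Demons registration itself, the sketch below shows a bare-bones Thirion-style update in 2D (a force step followed by Gaussian regularization of the displacement field). It is a generic illustration under simplifying assumptions, not the coupled registration-visualization pipeline proposed in the paper.

```python
# Minimal 2D Demons registration sketch: iterate a demons force update and
# smooth the displacement field. Parameters and iteration count are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed, moving, iters=50, sigma=2.0):
    fixed, moving = fixed.astype(float), moving.astype(float)
    uy = np.zeros_like(fixed)            # displacement field, y component
    ux = np.zeros_like(fixed)            # displacement field, x component
    gy, gx = np.gradient(fixed)          # fixed-image gradient drives the forces
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2 + 1e-9
        ux -= diff * gx / denom          # demons force update
        uy -= diff * gy / denom
        ux = gaussian_filter(ux, sigma)  # regularize the deformation field
        uy = gaussian_filter(uy, sigma)
    return ux, uy                        # displacement mapping fixed coordinates into the moving image
```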
Beyond perceptual expertise: revisiting the neural substrates of expert object recognition
Harel, Assaf; Kravitz, Dwight; Baker, Chris I.
2013-01-01
Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expertise-related activity is: (i) found throughout visual cortex, not just the FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer, suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134
NASA Technical Reports Server (NTRS)
Webb, W. B.
1972-01-01
Discussion of the electroencephalogram as the critical measurement procedure for sleep research, and survey of major findings that have emerged in the last decade on the presence of sleep within the twenty-four-hour cycle. Specifically, intrasleep processes, frequency of stage changes, sequence of stage events, sleep stage amounts, temporal patterns of sleep, and stability of intrasleep pattern in both man and lower animals are reviewed, along with some circadian aspects of sleep, temporal factors, and number of sleep episodes. It is felt that it is particularly critical to take the presence of sleep into account whenever performance is considered. When it is recognized that responsive performance is extremely limited during sleep, it is easy to visualize the extent to which performance is controlled by sleep itself.
Global motion perception is associated with motor function in 2-year-old children.
Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E
2017-09-29
The dorsal visual processing stream, which includes V1, the motion-sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school-aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly but statistically significantly associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
The feeling of fluent perception: a single experience from multiple asynchronous sources.
Wurtz, Pascal; Reber, Rolf; Zimmermann, Thomas D
2008-03-01
Zeki and co-workers recently proposed that perception can best be described as locally distributed, asynchronous processes that each create a kind of microconsciousness, which condense into an experienced percept. The present article is aimed at extending this theory to metacognitive feelings. We present evidence that perceptual fluency, the subjective feeling of ease during perceptual processing, is based on the speed of processing at different stages of the perceptual process. Specifically, detection of briefly presented stimuli was influenced by figure-ground contrast, but not by symmetry (Experiment 1) or the font (Experiment 2) of the stimuli. Conversely, discrimination of these stimuli was influenced by whether they were symmetric (Experiment 1) and by the font they were presented in (Experiment 2), but not by figure-ground contrast. Both tasks, however, were related to the subjective experience of fluency (Experiments 1 and 2). We conclude that subjective fluency is the conscious phenomenal correlate of different processing stages in visual perception.
Zhang, Xiao; Glennie, Craig L; Bucheli, Sibyl R; Lindgren, Natalie K; Lynne, Aaron M
2014-08-01
Decomposition can be a highly variable process with stages that are difficult to quantify. Using high-accuracy terrestrial laser scanning, repeated three-dimensional (3D) documentation of the volumetric changes of a human body during early decomposition was recorded. To determine temporal volumetric variations as well as the 3D distribution of the changed locations in the body over time, this paper introduces the use of multiple degenerated cylinder models to provide a reasonable approximation of body parts against which 3D change can be measured and visualized. An iterative closest point algorithm is used for 3D registration, and a method for determining volumetric change is presented. Comparison of the laser scanning estimates of volumetric change shows good agreement with repeated in-situ measurements of abdomen and limb circumference that were taken diurnally. The 3D visualizations of volumetric changes demonstrate that bloat is a process with a beginning, middle, and end rather than a state of presence or absence. Additionally, the 3D visualizations show conclusively that cadaver bloat is not isolated to the abdominal cavity, but also occurs in the limbs. Detailed quantification of the bloat stage of decay has the potential to alter how the beginning and end of bloat are determined by researchers and can provide further insight into the effects of the ecosystem on decomposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
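The volumetric bookkeeping behind the cylinder-model idea can be illustrated with a toy calculation: approximate a body segment as a stack of thin circular cross-sections and compare the summed volumes across scans. The radii below are hypothetical, and the paper's degenerated cylinder models fit to registered point clouds are considerably more sophisticated.

```python
# Toy volume-change estimate from per-slice radii (hypothetical values).
import numpy as np

def segment_volume(radii_cm, slice_height_cm=1.0):
    """Sum of thin-cylinder volumes: V = sum(pi * r^2 * h)."""
    r = np.asarray(radii_cm, dtype=float)
    return float(np.sum(np.pi * r**2 * slice_height_cm))

day1 = [5.0, 5.1, 5.2, 5.1]   # radii along a limb segment, first scan (cm)
day2 = [5.4, 5.6, 5.7, 5.5]   # same segment after bloat onset (cm)
print(f"Estimated volume change: {segment_volume(day2) - segment_volume(day1):.1f} cm^3")
```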
Decoding visual object categories in early somatosensory cortex.
Smith, Fraser W; Goodale, Melvyn A
2015-04-01
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
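The multivariate pattern analysis mentioned above follows a standard recipe: cross-validated classification of condition labels from voxel patterns within a region of interest. The sketch below uses random placeholder data, not the study's fMRI recordings or preprocessing.

```python
# Schematic ROI-based MVPA decoding: cross-validated linear classification of
# object category from voxel patterns. Data are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
X = rng.normal(size=(n_trials, n_voxels))   # voxel patterns from one ROI (e.g., S1)
y = rng.integers(0, 2, size=n_trials)       # object-category label per trial

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("Mean decoding accuracy:", scores.mean())  # ~0.5 (chance level) for random data
```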
DIA2: Web-based Cyberinfrastructure for Visual Analysis of Funding Portfolios.
Madhavan, Krishna; Elmqvist, Niklas; Vorvoreanu, Mihaela; Chen, Xin; Wong, Yuetling; Xian, Hanjun; Dong, Zhihua; Johri, Aditya
2014-12-01
We present a design study of the Deep Insights Anywhere, Anytime (DIA2) platform, a web-based visual analytics system that allows program managers and academic staff at the U.S. National Science Foundation to search, view, and analyze their research funding portfolio. The goal of this system is to facilitate users' understanding of both past and currently active research awards in order to make more informed decisions about future funding. This user group is characterized by high domain expertise yet not necessarily high literacy in visualization and visual analytics (they are essentially casual experts), and thus requires careful visual and information design, including adhering to user experience standards, providing a self-instructive interface, and progressively refining visualizations to minimize complexity. We discuss the challenges of designing a system for casual experts and highlight how we addressed this issue by modeling the organizational structure and workflows of the NSF within our system. We discuss each stage of the design process, starting with formative interviews, prototypes, and finally live deployments and evaluation with stakeholders.
Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco
2011-04-26
Humans and other animals change the way they perceive the world due to experience. This process has been labeled perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity with the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance, as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment, revealing amplitude increments, specific and unspecific to the trained stimulus, in the event-related potential (ERP) components N2pc and P3, respectively. The unspecific P3 modification may be related to context- or task-based learning, while N2pc may reflect a more specific attention-related boosting of target detection. Moreover, bell- and U-shaped profiles of oscillatory brain activity in the gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases of learning acquisition, which can be understood as distinct optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
Central Pain Processing in Early-Stage Parkinson's Disease: A Laser Pain fMRI Study
Petschow, Christine; Scheef, Lukas; Paus, Sebastian; Zimmermann, Nadine; Schild, Hans H.; Klockgether, Thomas; Boecker, Henning
2016-01-01
Background & Objective Pain is a common non-motor symptom in Parkinson’s disease. As dopaminergic dysfunction is suggested to affect intrinsic nociceptive processing, this study was designed to characterize laser-induced pain processing in early-stage Parkinson’s disease patients in the dopaminergic OFF state, using a multimodal experimental approach at behavioral, autonomic, imaging levels. Methods 13 right-handed early-stage Parkinson’s disease patients without cognitive or sensory impairment were investigated OFF medication, along with 13 age-matched healthy control subjects. Measurements included warmth perception thresholds, heat pain thresholds, and central pain processing with event-related functional magnetic resonance imaging (erfMRI) during laser-induced pain stimulation at lower (E = 440 mJ) and higher (E = 640 mJ) target energies. Additionally, electrodermal activity was characterized during delivery of 60 randomized pain stimuli ranging from 440 mJ to 640 mJ, along with evaluation of subjective pain ratings on a visual analogue scale. Results No significant differences in warmth perception thresholds, heat pain thresholds, electrodermal activity and subjective pain ratings were found between Parkinson’s disease patients and controls, and erfMRI revealed a generally comparable activation pattern induced by laser-pain stimuli in brain areas belonging to the central pain matrix. However, relatively reduced deactivation was found in Parkinson’s disease patients in posterior regions of the default mode network, notably the precuneus and the posterior cingulate cortex. Conclusion Our data during pain processing extend previous findings suggesting default mode network dysfunction in Parkinson’s disease. On the other hand, they argue against a genuine pain-specific processing abnormality in early-stage Parkinson’s disease. Future studies are now required using similar multimodal experimental designs to examine pain processing in more advanced stages of Parkinson’s disease. PMID:27776130
Andersen, Erica; Asuri, Namrata; Clay, Matthew; Halloran, Mary
2010-01-01
The zebrafish is an ideal model for imaging cell behaviors during development in vivo. Zebrafish embryos are externally fertilized and thus easily accessible at all stages of development. Moreover, their optical clarity allows high resolution imaging of cell and molecular dynamics in the natural environment of the intact embryo. We are using a live imaging approach to analyze cell behaviors during neural crest cell migration and the outgrowth and guidance of neuronal axons. Live imaging is particularly useful for understanding mechanisms that regulate cell motility processes. To visualize details of cell motility, such as protrusive activity and molecular dynamics, it is advantageous to label individual cells. In zebrafish, plasmid DNA injection yields a transient mosaic expression pattern and offers distinct benefits over other cell labeling methods. For example, transgenic lines often label entire cell populations and thus may obscure visualization of the fine protrusions (or changes in molecular distribution) in a single cell. In addition, injection of DNA at the one-cell stage is less invasive and more precise than dye injections at later stages. Here we describe a method for labeling individual developing neurons or neural crest cells and imaging their behavior in vivo. We inject plasmid DNA into 1-cell stage embryos, which results in mosaic transgene expression. The vectors contain cell-specific promoters that drive expression of a gene of interest in a subset of sensory neurons or neural crest cells. We provide examples of cells labeled with membrane-targeted GFP or with a biosensor probe that allows visualization of F-actin in living cells. Erica Andersen, Namrata Asuri, and Matthew Clay contributed equally to this work. PMID:20130524
Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim
2015-07-30
Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is a need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique, and a wide range of time- and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection, and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88, respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92, respectively. The DSVM framework outperforms classification with more standard multi-class "one-against-all" SVM and linear discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.
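A stripped-down version of the dendrogram-guided SVM idea is sketched below: class centroids are clustered hierarchically and each split of the resulting tree gets its own binary SVM. Feature extraction from EEG/EOG/EMG, forward sequential feature selection and cross-validation are omitted, so this is only a structural illustration, not the authors' implementation.

```python
# Sketch of a dendrogram-based SVM (DSVM)-style classifier: a binary SVM at
# each split of a hierarchical clustering of class centroids. Simplified.
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from sklearn.svm import SVC

def build_dsvm(X, y):
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    root = to_tree(linkage(centroids, method="ward"))

    def fit_node(node):
        if node.is_leaf():
            return {"label": classes[node.id]}
        left = [classes[i] for i in node.left.pre_order()]
        right = [classes[i] for i in node.right.pre_order()]
        mask = np.isin(y, left + right)
        target = np.isin(y[mask], left).astype(int)   # 1 = go down the left branch
        clf = SVC(kernel="rbf").fit(X[mask], target)
        return {"clf": clf, "left": fit_node(node.left), "right": fit_node(node.right)}

    return fit_node(root)

def predict_one(tree, x):
    node = tree
    while "label" not in node:
        go_left = node["clf"].predict(x.reshape(1, -1))[0] == 1
        node = node["left"] if go_left else node["right"]
    return node["label"]
```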
Serra, X; Grèbol, N; Guàrdia, M D; Guerrero, L; Gou, P; Masoliver, P; Gassiot, M; Sárraga, C; Monfort, J M; Arnau, J
2007-01-01
This paper describes the effect of high pressure (400 MPa and 600 MPa) applied to frozen hams at early stages of the dry-cured ham process: green hams (GH) and hams at the end of the resting stage (ERS), on the appearance, some texture and flavour parameters and on the instrumental colour characteristics of dry-cured hams. Pressurized hams showed slightly lower visual colour intensity than the control ones. In general, pressurization did not have a significant effect on the flavour characteristics of the final product. The 600-MPa hams from the ERS process showed significantly lower crumbliness and higher fibrousness scores than the control and the 400-MPa hams. However, none of these differences were enough to affect the overall sensory quality of the hams negatively. Regarding instrumental colour characteristics (L*a*b*), an increase in lightness was observed in the biceps femoris muscle from GH hams pressurized at 400 MPa and 600 MPa but not in the ERS hams.
Emotion and attention: event-related brain potential studies.
Schupp, Harald T; Flaisch, Tobias; Stockburger, Jessica; Junghöfer, Markus
2006-01-01
Emotional pictures guide selective visual attention. A series of event-related brain potential (ERP) studies is reviewed demonstrating the consistent and robust modulation of specific ERP components by emotional images. Specifically, pictures depicting natural pleasant and unpleasant scenes are associated with an increased early posterior negativity, late positive potential, and sustained positive slow wave compared with neutral contents. These modulations are considered to index different stages of stimulus processing including perceptual encoding, stimulus representation in working memory, and elaborate stimulus evaluation. Furthermore, the review includes a discussion of studies exploring the interaction of motivated attention with passive and active forms of attentional control. Recent research is reviewed exploring the selective processing of emotional cues as a function of stimulus novelty, emotional prime pictures, learned stimulus significance, and in the context of explicit attention tasks. It is concluded that ERP measures are useful to assess the emotion-attention interface at the level of distinct processing stages. Results are discussed within the context of two-stage models of stimulus perception brought out by studies of attention, orienting, and learning.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period.
Marti, Sébastien; Sigman, Mariano; Dehaene, Stanislas
2012-02-01
Doing two things at once is difficult. When two tasks have to be performed within a short interval, the second is sharply delayed, an effect called the Psychological Refractory Period (PRP). Similarly, when two successive visual targets are briefly flashed, people may fail to detect the second target (Attentional Blink or AB). Although AB and PRP are typically studied in very different paradigms, a recent detailed neuromimetic model suggests that both might arise from the same serial stage during which stimuli gain access to consciousness and, as a result, can be arbitrarily routed to any other appropriate processor. Here, in agreement with this model, we demonstrate that AB and PRP can be obtained on alternate trials of the same cross-modal paradigm and result from limitations in the same brain mechanisms. We asked participants to respond as fast as possible to an auditory target T1 and then to a visual target T2 embedded in a series of distractors, while brain activity was recorded with magneto-encephalography (MEG). For identical stimuli, we observed a mixture of blinked trials, where T2 was entirely missed, and PRP trials, where T2 processing was delayed. MEG recordings showed that PRP and blinked trials underwent identical sensory processing in visual occipito-temporal cortices, even including the non-conscious separation of targets from distractors. However, late activations in frontal cortex (>350 ms), strongly influenced by the speed of task-1 execution, were delayed in PRP trials and absent in blinked trials. Our findings suggest that PRP and AB arise from similar cortical stages, can occur with the same exact stimuli, and are merely distinguished by trial-by-trial fluctuations in task processing. Copyright © 2011 Elsevier Inc. All rights reserved.
Simon, Anja; Bock, Otmar
2015-01-01
A new 3-stage model based on neuroimaging evidence is proposed by Chein and Schneider (2012). Each stage is associated with different brain regions and draws on different cognitive abilities: the first stage on creativity, the second on selective attention, and the third on automatic processing. The purpose of the present study was to scrutinize the validity of this model for 1 popular learning paradigm, visuomotor adaptation. Participants completed tests for creativity, selective attention and automated processing before taking part in a pointing task with adaptation to a 60° rotation of visual feedback. To examine the relationship between cognitive abilities and motor learning at different times of practice, associations between cognitive and adaptation scores were calculated repeatedly throughout adaptation. The authors found no benefit of high creativity for adaptive performance. High levels of selective attention were positively associated with early adaptation, but hardly with late adaptation and de-adaptation. High levels of automated execution were beneficial for late adaptation, but hardly for early adaptation and de-adaptation. From this we conclude that Chein and Schneider's first learning stage is difficult to confirm by research on visuomotor adaptation, and that the other 2 learning stages relate to workaround strategies rather than to actual adaptive recalibration.
A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough
NASA Astrophysics Data System (ADS)
Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru
A novel technique was developed to monitor the change in the three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze the 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick-up, clean-up, development, and final stages) and also from the over-mixing stage. In the pick-up stage, yeast distribution was uneven, with local areas of dense yeast. As the mixing progressed from the clean-up to the final stage, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over-mixing stage, possibly due to the breakdown of gluten structure within the dough sample.
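Conceptually, the reconstruction step amounts to stacking the serial 1-μm sections into a volume and locating the bright fluorescent spots. The sketch below is a generic illustration of that idea with synthetic data; it is not the MSIPS software, and the threshold is an arbitrary assumption.

```python
# Stack serial fluorescence slices into a 3D volume and count bright spots by
# thresholding and connected-component labelling. Synthetic data; illustrative only.
import numpy as np
from scipy import ndimage

def count_bright_spots(slices, threshold=0.5):
    """slices: list of 2D arrays (one per section), intensities normalized to [0, 1]."""
    volume = np.stack(slices, axis=0)                 # (z, y, x) volume
    labels, n_spots = ndimage.label(volume > threshold)
    centers = ndimage.center_of_mass(volume, labels, range(1, n_spots + 1))
    return n_spots, centers                           # count and 3D positions of spots

rng = np.random.default_rng(1)
demo = [rng.random((64, 64)) * 0.4 for _ in range(20)]  # dim background slices
demo[10][30:33, 30:33] = 1.0                            # one synthetic "yeast" spot
print(count_bright_spots(demo)[0])                      # -> 1
```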
NMF-Based Image Quality Assessment Using Extreme Learning Machine.
Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun
2017-01-01
Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. As for the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should express the relationship between the quality descriptors and perceived visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
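The two stages can be caricatured in a few lines: NMF codes of reference and distorted patches provide a distortion description, and a tiny extreme learning machine (random hidden layer, closed-form output weights) does the pooling into a quality score. The feature definition and all parameters below are simplified assumptions, not the metric proposed in the paper.

```python
# Simplified two-stage IQA skeleton: NMF-based degradation features + ELM pooling.
import numpy as np
from sklearn.decomposition import NMF

def nmf_degradation_features(ref_patches, dist_patches, n_components=8):
    """Mean absolute difference of NMF activation codes (one vector per image pair)."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    h_ref = model.fit_transform(ref_patches)    # parts-based codes of reference patches
    h_dist = model.transform(dist_patches)      # codes of the distorted patches
    return np.abs(h_ref - h_dist).mean(axis=0)

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer, least-squares readout."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        H = np.tanh(X @ self.W)                 # random nonlinear hidden layer
        self.beta = np.linalg.pinv(H) @ y       # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W) @ self.beta
```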
Puller, Christian; Rieke, Fred; Neitz, Jay; Neitz, Maureen
2015-01-01
At early stages of visual processing, receptive fields are typically described as subtending local regions of space and thus performing computations on a narrow spatial scale. Nevertheless, stimulation well outside of the classical receptive field can exert clear and significant effects on visual processing. Given the distances over which they occur, the retinal mechanisms responsible for these long-range effects would certainly require signal propagation via active membrane properties. Here the physiology of a wide-field amacrine cell—the wiry cell—in macaque monkey retina is explored, revealing receptive fields that represent a striking departure from the classic structure. A single wiry cell integrates signals over wide regions of retina, 5–10 times larger than the classic receptive fields of most retinal ganglion cells. Wiry cells integrate signals over space much more effectively than predicted from passive signal propagation, and spatial integration is strongly attenuated during blockade of NMDA spikes but integration is insensitive to blockade of NaV channels with TTX. Thus these cells appear well suited for contributing to the long-range interactions of visual signals that characterize many aspects of visual perception. PMID:26133804
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels: in order from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to the GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved markedly. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
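As a point of reference for the classical part of such receptive fields, a centre-surround response can be approximated with a difference of Gaussians; the non-classical RF model designed in the paper is more elaborate, so the sketch below is only the basic building block under that simplifying assumption.

```python
# Difference-of-Gaussians centre-surround response, a common simplification of
# ganglion-cell receptive fields (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
    """Centre response minus surround response at every pixel."""
    img = np.asarray(image, dtype=float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

# Edges and other structural features survive; smooth regions are suppressed.
```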
How the bimodal format of presentation affects working memory: an overview.
Mastroberardino, Serena; Santangelo, Valerio; Botta, Fabiano; Marucci, Francesco S; Olivetti Belardinelli, Marta
2008-03-01
The best format for presenting information that has to be recalled has been investigated in several studies, which focused on the impact of bimodal stimulation on working memory performance. An enhancement of participants' performance in terms of correct recall has been repeatedly found when bimodal formats of presentation (i.e., audiovisual) were compared to unimodal formats (i.e., either visual or auditory), with implications for multimedia learning. Several theoretical frameworks have been suggested in order to account for the bimodal advantage, ranging from those emphasizing early stages of processing (such as automatic alerting effects or multisensory integration processes) to those centred on late stages of processing (as postulated by the dual coding theory). The aim of this paper is to review previous contributions to this topic, providing a comprehensive theoretical framework, which is updated by the latest empirical studies.
Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex
Lafer-Sousa, Rosa; Conway, Bevil R.
2014-01-01
Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314
Binding of motion and colour is early and automatic.
Blaser, Erik; Papathomas, Thomas; Vidnyánszky, Zoltán
2005-04-01
At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour-contingent motion after-effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite-motion dot display, a stimulus that, importantly, is known to trigger direction-specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.
78 FR 20667 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-05
Visualization of biological texture using correlation coefficient images (J Biomed Opt, 2006). Development stage: early-stage; in vitro data available. Inventors: Paolo Lusso and David J. Auerbach (NIAID). Algorithms to visualize regions of statistical similarity in the image have been developed.
Predictive Coding: A Fresh View of Inhibition in the Retina
NASA Astrophysics Data System (ADS)
Srinivasan, M. V.; Laughlin, S. B.; Dubs, A.
1982-11-01
Interneurons exhibiting centre--surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal: noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. The treatment emphasizes that a neuron's dynamic range should be matched to both its receptive field and the statistical properties of the visual pattern expected within this field. Finally, the analysis is synthetic because it is an extension of the background suppression hypothesis (Barlow & Levick 1976), satisfies the redundancy reduction hypothesis (Barlow 1961 a, b) and is equivalent to deblurring under certain conditions (Ratliff 1965).
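A minimal sketch of the predictive-coding idea described above: the surround forms a weighted local mean that predicts the centre signal, and only the residual is transmitted. The Gaussian surround and the simple snr/(snr+1) gain are stand-ins for the optimal, SNR-dependent weighting derived in the paper:

import numpy as np
from scipy.ndimage import gaussian_filter

def predictive_code(image, sigma_surround=2.0, snr=10.0):
    """Encode an image as centre minus predicted centre.

    The surround prediction is a weighted mean of neighbouring pixels;
    scaling it by snr / (snr + 1) is a simplified stand-in for the
    SNR-dependent inhibitory weighting derived in the paper, so the
    surround becomes weaker at low signal-to-noise ratios.
    """
    img = image.astype(float)
    prediction = gaussian_filter(img, sigma_surround)   # weighted mean of neighbours
    gain = snr / (snr + 1.0)                            # weaker surround at low SNR
    return img - gain * prediction                      # small-range residual output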
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be elicited more readily by relevant motion (predators, prey, conspecifics) than by irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators, and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees of visual angle and a time constant of 0.1 s. With these parameters, the model gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under the full range of natural conditions.
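A toy sketch of a correlation-type (Hassenstein-Reichardt) EMD array along one spatial dimension, with a first-order low-pass delay filter; the filter form and discrete sampling are illustrative assumptions, with tau and spacing playing the roles of the 0.1 s time constant and 0.3 degree spacing reported above:

import numpy as np

def reichardt_emd(signals, dt, tau=0.1, spacing=1):
    """Correlation-type elementary motion detectors along one spatial row.

    signals : array of shape (time, space) with luminance samples
    dt      : sampling interval in seconds
    tau     : low-pass (delay) time constant in seconds
    spacing : detector separation in samples
    """
    lp = np.zeros_like(signals, dtype=float)
    alpha = dt / (tau + dt)
    for t in range(1, signals.shape[0]):                  # first-order low-pass filter
        lp[t] = lp[t - 1] + alpha * (signals[t] - lp[t - 1])
    a, b = signals[:, :-spacing], signals[:, spacing:]    # neighbouring undelayed inputs
    la, lb = lp[:, :-spacing], lp[:, spacing:]            # delayed (low-passed) inputs
    return la * b - lb * a                                # opponent correlation output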
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas
2015-03-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.
Ebrahimi, Farideh; Mikaeili, Mohammad; Estrada, Edson; Nazeran, Homer
2008-01-01
An alarmingly large number of people worldwide suffer from sleep disorders. Biomedical signals such as the EEG, EMG, ECG, and EOG are used in sleep labs, among other settings, for the diagnosis and treatment of sleep-related disorders. The usual method for sleep stage classification is visual inspection by a sleep specialist, which is a very time-consuming and laborious exercise. Automatic sleep stage classification can facilitate this process. The definition of sleep stages and the sleep literature show that EEG signals are similar in Stage 1 of non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep. Therefore, in this work an attempt was made to classify four sleep stages, consisting of Awake, Stage 1 + REM, Stage 2, and Slow Wave Stage, based on the EEG signal alone. Wavelet packet coefficients and artificial neural networks were deployed for this purpose. Seven all-night recordings from the PhysioNet database were used in the study. The results demonstrated that these four sleep stages could be automatically discriminated from each other with a specificity of 94.4 +/- 4.5%, a sensitivity of 84.2 +/- 3.9%, and an accuracy of 93.0 +/- 4.0%.
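A hedged sketch of the general approach (wavelet packet energies fed to a small neural network), assuming PyWavelets and scikit-learn; the db4 wavelet, decomposition level, energy features, and network size are illustrative choices, not necessarily those of the study:

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_packet_features(epoch, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node for one EEG epoch."""
    wp = pywt.WaveletPacket(data=epoch, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# X: (n_epochs, n_samples) raw EEG epochs; y: stage labels (Awake, S1+REM, S2, SWS)
# feats = np.vstack([wavelet_packet_features(e) for e in X])
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(feats, y)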
Visual naming deficits in dyslexia: An ERP investigation of different processing domains.
Araújo, Susana; Faísca, Luís; Reis, Alexandra; Marques, J Frederico; Petersson, Karl Magnus
2016-10-01
Naming speed deficits are well documented in developmental dyslexia, expressed as slower naming times and more errors in response to familiar items. Here we used event-related potentials (ERPs) to examine at what processing level the deficits in dyslexia emerge during a discrete-naming task. Dyslexic and skilled adult control readers performed a primed object-naming task in which the relationship between the prime and the target was manipulated along perceptual, semantic, and phonological dimensions. A 3×2 design that crossed Relationship Type (Visual, Phonemic Onset, and Semantic) with Relatedness (Related and Unrelated) was used. An attenuated N/P190, indexing early visual processing, and an attenuated N300, indexing late visual processing, were observed for pictures preceded by perceptually related (vs. unrelated) primes in the control but not in the dyslexic group. These findings suggest suboptimal processing in early stages of object processing in dyslexia, when integration and mapping of perceptual information onto a more form-specific percept in memory take place. On the other hand, both groups showed an N400 effect associated with semantically related (vs. unrelated) pictures, taken to reflect intact integration of semantic similarities in both dyslexic and control readers. We also found an electrophysiological effect of phonological priming in the N400 range, that is, an attenuated N400 to objects preceded by phonemically related vs. unrelated primes, which was more widespread in distribution and more pronounced over the right hemisphere in the dyslexics. Topographic differences between groups might have originated from a word-form encoding process with different characteristics in dyslexics compared to control readers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.
Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia
2018-01-01
It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.
Crowding with conjunctions of simple features.
Põder, Endel; Wagemans, Johan
2007-11-20
Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.
Star formation history: Modeling of visual binaries
NASA Astrophysics Data System (ADS)
Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.
2018-05-01
Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
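A minimal sketch of the random-pairing scenario only, drawing both component masses independently from a single power-law IMF by inverse-transform sampling; the Salpeter-like slope and mass limits are assumptions for illustration:

import numpy as np

def sample_imf(n, alpha=2.35, m_min=0.1, m_max=100.0, rng=None):
    """Draw n masses (solar units) from a power-law IMF dN/dm ~ m**(-alpha)."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def random_pairing(n_binaries, rng=None):
    """Random pairing: both components drawn independently from the IMF."""
    m1 = sample_imf(n_binaries, rng=rng)
    m2 = sample_imf(n_binaries, rng=rng)
    primary, secondary = np.maximum(m1, m2), np.minimum(m1, m2)
    return primary, secondary / primary   # primary mass and mass ratio q

# p, q = random_pairing(100_000); compare the q histogram with observed distributions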
Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang
2015-07-15
Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.
Changes in carbohydrate metabolism in coconut palms infected with the lethal yellowing phytoplasma.
Maust, B E; Espadas, F; Talavera, C; Aguilar, M; Santamaría, J M; Oropeza, C
2003-08-01
Lethal yellowing (LY), a disease caused by a phytoplasma, is the most devastating disease affecting coconut (Cocos nucifera) in Mexico. Thousands of coconut palm trees have died on the Yucatan peninsula while plantations in Central America and on the Pacific coast of Mexico are severely threatened. Polymerase chain reaction assays enable identification of incubating palm trees (stage 0+, phytoplasma detected but palm asymptomatic). With the development of LY, palm trees exhibit various visual symptoms such as premature nut fall (stage 1), inflorescence necrosis (stages 2 to 3), leaf chlorosis and senescence (stages 4 to 6), and finally palm death. However, physiological changes occur in the leaves and roots prior to onset of visual symptoms. Stomatal conductance, photosynthesis, and root respiration decreased in stages 0+ to 6. The number of active photosystem II (PSII) reaction centers decreased during stage 2, but maximum quantum use efficiency of PSII remained similar until stage 3 before declining. Sugar and starch concentrations in intermediate leaves (leaf 14) and upper leaves (leaf 4) increased from stage 0- (healthy) to stages 2 to 4, while root carbohydrate concentrations decreased rapidly from stage 0- to stage 0+ (incubating phytoplasma). Although photosynthetic rates and root carbohydrate concentrations decreased, leaf carbohydrate concentrations increased, suggesting inhibition of sugar transport in the phloem leading to stress in sink tissues and development of visual symptoms of LY.
ERIC Educational Resources Information Center
Marcet, Ana; Perea, Manuel
2018-01-01
Previous research has shown that early in the word recognition process, there is some degree of uncertainty concerning letter identity and letter position. Here, we examined whether this uncertainty also extends to the mapping of letter features onto letters, as predicted by the Bayesian Reader (Norris & Kinoshita, 2012). Indeed, anecdotal…
The artificial retina for track reconstruction at the LHC crossing rate
NASA Astrophysics Data System (ADS)
Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.
2016-04-01
We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.
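A simplified sketch of the retina-algorithm idea for straight two-dimensional tracks: each cell in a grid of (slope, intercept) track hypotheses accumulates a Gaussian-weighted response to every hit, and track candidates appear as local maxima of the response map. The parameterization and receptor width sigma are illustrative assumptions, not the engine's actual implementation:

import numpy as np

def retina_response(hits, slopes, intercepts, sigma=0.5):
    """Artificial-retina-style response map for straight 2-D tracks y = m*x + q.

    hits       : iterable of (x, y) measured hit coordinates
    slopes     : 1-D grid of candidate slopes m (one axis of the cell grid)
    intercepts : 1-D grid of candidate intercepts q (other axis of the cell grid)
    sigma      : receptor width controlling how far a hit excites a cell
    """
    m, q = np.meshgrid(slopes.astype(float), intercepts.astype(float), indexing="ij")
    response = np.zeros_like(m)
    for x, y in hits:
        predicted = m * x + q                                     # where each cell expects the hit
        response += np.exp(-((y - predicted) ** 2) / (2.0 * sigma ** 2))
    return response

# best_cell = np.unravel_index(retina_response(hits, slopes, intercepts).argmax(), (len(slopes), len(intercepts)))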
Demonstration of Three Gorges archaeological relics based on 3D-visualization technology
NASA Astrophysics Data System (ADS)
Xu, Wenli
2015-12-01
This paper focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, a virtual studio, and virtual panoramic roaming, is proposed to create a digitized interactive demonstration system. The method consists of three stages: pre-processing, 3D modeling, and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D-model library is built using digital image processing and 3D modeling techniques. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.
Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.
Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui
2015-09-01
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay
2010-01-01
In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. A two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, and the transmission map and intensity restoration are computed in the subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual-core processor. Additionally, a high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs, and 8159 LUTs.
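A hedged software sketch of the conventional dark channel prior pipeline that the FPGA design builds on, showing the two stages named above (dark channel and airlight estimation, then transmission map and radiance restoration); the 15-pixel window, omega = 0.95, and the omission of transmission-map refinement are standard simplifying assumptions, not the paper's hardware design:

import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, window=15, omega=0.95, t0=0.1):
    """Single-image dehazing with the dark channel prior.

    img : float RGB image in [0, 1], shape (H, W, 3)
    Stage 1: dark channel and airlight estimation.
    Stage 2: transmission map and scene-radiance restoration.
    """
    dark = minimum_filter(img.min(axis=2), size=window)           # dark channel
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]           # brightest 0.1% of dark channel
    airlight = img.reshape(-1, 3)[idx].max(axis=0)                # atmospheric light (simplified)
    norm = img / np.maximum(airlight, 1e-6)
    transmission = 1.0 - omega * minimum_filter(norm.min(axis=2), size=window)
    t = np.clip(transmission, t0, 1.0)[..., None]
    return np.clip((img - airlight) / t + airlight, 0.0, 1.0)     # restored scene radiance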
Dysfunctional visual word form processing in progressive alexia.
Wilson, Stephen M; Rising, Kindle; Stib, Matthew T; Rapcsak, Steven Z; Beeson, Pélagie M
2013-04-01
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the 'visual word form area'. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.
Reconstruction dynamics of recorded holograms in photochromic glass.
Mihailescu, Mona; Pavel, Eugen; Nicolae, Vasile B
2011-06-20
We have investigated the dynamics of the record-erase process of holograms in photochromic glass using continuum Nd:YVO₄ laser radiation (λ=532 nm). A bidimensional microgrid pattern was formed and visualized in photochromic glass, and its diffraction efficiency decay versus time (during reconstruction step) gave us information (D, Δn) about the diffusion process inside the material. The recording and reconstruction processes were carried out in an off-axis setup, and the images of the reconstructed object were recorded by a CCD camera. Measurements realized on reconstructed object images using holograms recorded at a different incident power laser have shown a two-stage process involved in silver atom kinetics.
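A hedged sketch of how a diffusion coefficient could be extracted from such a decay, assuming the grating amplitude decays exponentially so that efficiency falls as exp(-2t/tau) and tau relates to the grating period via tau = Lambda^2 / (4 * pi^2 * D); the single-exponential form is a simplification, and the two-stage kinetics reported above would call for a bi-exponential fit:

import numpy as np
from scipy.optimize import curve_fit

def efficiency_decay(t, eta0, tau):
    """Diffraction efficiency for a grating whose amplitude decays as exp(-t/tau);
    efficiency goes as amplitude squared, hence the factor of 2."""
    return eta0 * np.exp(-2.0 * t / tau)

def diffusion_coefficient(times, efficiencies, grating_period):
    """Fit the decay and convert tau to D, assuming tau = Lambda**2 / (4 * pi**2 * D)."""
    (eta0, tau), _ = curve_fit(efficiency_decay, np.asarray(times), np.asarray(efficiencies),
                               p0=(efficiencies[0], times[-1] / 2.0))
    return grating_period ** 2 / (4.0 * np.pi ** 2 * tau)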
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
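A minimal sketch of the distributional-learning idea using scikit-learn's GaussianMixture on joint auditory-visual cue values; the simulated cue dimensions, means, and variances are illustrative assumptions, not the paper's simulations:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated joint (auditory, visual) cue values for two phonological categories;
# the cue names, means, and variances are illustrative assumptions.
cat_a = rng.normal([10.0, 0.2], [5.0, 0.1], size=(500, 2))
cat_b = rng.normal([50.0, 0.8], [10.0, 0.1], size=(500, 2))
cues = np.vstack([cat_a, cat_b])

# Unsupervised distributional learning of two audiovisual categories.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(cues)

# "Perception" of a new token: posterior probability of each learned category,
# which naturally weights the more reliable (lower-variance) cue more heavily.
print(gmm.predict_proba([[30.0, 0.5]]))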
Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan
2018-04-11
Variability in neuronal response latency has typically been attributed to random noise. Previous studies of single cells and large neuronal populations have shown that temporal variability tends to increase along the visual pathway. Inspired by these studies, we hypothesized that functional areas at later stages of the visual pathway for face processing would show larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine fissure to the fusiform gyrus was detected more reliably from the size of the response variability than from the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that variability in response latency across brain areas can be used to infer the sequence of cortical activity.
Top-down control of visual perception: attention in natural vision.
Rolls, Edmund T
2008-01-01
Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when such networks in perceptual systems take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that allows image filtering while preserving object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined; for this purpose a threshold-processing method, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve object boundaries, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, X-ray, and microbiological studies are shown. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
Hübner, Ronald; Volberg, Gregor
2005-06-01
This article presents and tests the authors' integration hypothesis of global/local processing, which proposes that at early stages of processing, the identities of global and local units of a hierarchical stimulus are represented separately from information about their respective levels and that, therefore, identity and level information have to be integrated at later stages. It further states that the cerebral hemispheres differ in their capacities for these binding processes. Three experiments are reported in which the integration hypothesis was tested. Participants had to identify a letter at a prespecified level with the viewing duration restricted by a mask. False reporting of the letter at the nontarget level was predicted to occur more often when the integration of identity and level could fail. This was the case. Moreover, visual-field effects occurred, as expected. Finally, a multinomial model was constructed and fitted to the data. ((c) 2005 APA, all rights reserved).
Real-time simulation of the retina allowing visualization of each processing stage
NASA Astrophysics Data System (ADS)
Teeters, Jeffrey L.; Werblin, Frank S.
1991-08-01
The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change-detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output, which is displayed on a TV monitor. By changing the retinal cell type driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina and fed into the brain. These dynamical aspects make the patterns very different from those generated by the common DOG (difference of Gaussians) model of the receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
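A toy, non-real-time sketch of such a staged model: photoreceptor blur, a broad horizontal-cell surround, a bipolar centre-surround difference, and a transient ganglion-cell stage computed as a frame-to-frame change signal. The filter widths and the change-detection rule are assumptions, and the PIPE-specific implementation details are omitted:

import numpy as np
from scipy.ndimage import gaussian_filter

class ToyRetina:
    """Stage-by-stage retina model; each stage output is a viewable image."""

    def __init__(self, sigma_center=1.0, sigma_surround=4.0):
        self.sigma_center = sigma_center
        self.sigma_surround = sigma_surround
        self.previous_bipolar = None

    def photoreceptor(self, frame):
        return gaussian_filter(frame.astype(float), self.sigma_center)

    def horizontal(self, photo):
        return gaussian_filter(photo, self.sigma_surround)      # broad surround signal

    def bipolar(self, photo, horiz):
        return photo - horiz                                    # centre-surround (DoG-like)

    def transient_ganglion(self, bipolar):
        prev = self.previous_bipolar if self.previous_bipolar is not None else bipolar
        self.previous_bipolar = bipolar
        return np.abs(bipolar - prev)                           # change-detecting output

    def step(self, frame):
        photo = self.photoreceptor(frame)
        horiz = self.horizontal(photo)
        bip = self.bipolar(photo, horiz)
        return {"photoreceptor": photo, "horizontal": horiz,
                "bipolar": bip, "ganglion": self.transient_ganglion(bip)}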
Emergence of artistic talent in frontotemporal dementia.
Miller, B L; Cummings, J; Mishkin, F; Boone, K; Prince, F; Ponton, M; Cotman, C
1998-10-01
To describe the clinical, neuropsychological, and imaging features of five patients with frontotemporal dementia (FTD) who acquired new artistic skills in the setting of dementia. Creativity in the setting of dementia has recently been reported. We describe five patients who became visual artists in the setting of FTD. Sixty-nine FTD patients were interviewed regarding visual abilities. Five became artists in the early stages of FTD. Their history, artistic process, neuropsychology, and anatomy are described. On SPECT or pathology, four of the five patients had the temporal variant of FTD in which anterior temporal lobes are involved but the dorsolateral frontal cortex is spared. Visual skills were spared but language and social skills were devastated. Loss of function in the anterior temporal lobes may lead to the "facilitation" of artistic skills. Patients with the temporal lobe variant of FTD offer a window into creativity.
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
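A minimal sketch of the between-task decoding logic: train a high-versus-low load classifier on patterns from one working memory task and test it on the other. The linear SVM, standardization, and array layout are illustrative assumptions rather than the study's actual machine-learning pipeline:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cross_task_decoding(X_visual, y_visual, X_verbal, y_verbal):
    """Train a load classifier (high vs low) on one WM task, test on the other.

    X_* : (n_trials, n_voxels) activation patterns; y_* : 0 = low load, 1 = high load.
    Returns accuracies for visual-to-verbal and verbal-to-visual transfer.
    """
    clf = make_pipeline(StandardScaler(), LinearSVC())
    vis_to_verb = clf.fit(X_visual, y_visual).score(X_verbal, y_verbal)
    clf = make_pipeline(StandardScaler(), LinearSVC())
    verb_to_vis = clf.fit(X_verbal, y_verbal).score(X_visual, y_visual)
    return vis_to_verb, verb_to_vis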
Disturbed temporal dynamics of brain synchronization in vision loss.
Bola, Michał; Gall, Carolin; Sabel, Bernhard A
2015-06-01
Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process - rather it emerges from activity of widespread cortical networks which coordinate visual processing in space and time. Here we set out to study how vision loss affects activity of brain visual networks and how networks' activity is related to perception. Specifically, we focused on studying temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized-/desynchronized-periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We revealed that patients exhibit less complex, more random and noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation. We propose that because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. Copyright © 2015 Elsevier Ltd. All rights reserved.
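Illustrative sketches of two of the measures named above: the phase locking value between two narrow-band signals (via the Hilbert transform) and Higuchi's fractal dimension of a time series; band-pass filtering, epoching, and the choice of k_max are assumptions left to the reader:

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two narrow-band signals of equal length."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

def higuchi_fd(signal, k_max=10):
    """Higuchi fractal dimension of a 1-D time series."""
    n = len(signal)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            diff = np.abs(np.diff(signal[idx])).sum()
            lk.append(diff * (n - 1) / ((len(idx) - 1) * k) / k)   # curve length at scale k
        lengths.append(np.mean(lk))
    # FD is the slope of log(L(k)) against log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(lengths), 1)
    return slope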
Effects of speaker emotional facial expression and listener age on incremental sentence processing.
Carminati, Maria Nella; Knoeferle, Pia
2013-01-01
We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, Mean age = 23) and older (N = 32, Mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.
Differential encoding of spatial information among retinal on cone bipolar cells
Purgert, Robert J.
2015-01-01
The retina is the first stage of visual processing. It encodes elemental features of visual scenes. Distinct cone bipolar cells provide the substrate for this to occur. They encode visual information, such as color and luminance, a principle known as parallel processing. Few studies have directly examined whether different forms of spatial information are processed in parallel among cone bipolar cells. To address this issue, we examined the spatial information encoded by mouse ON cone bipolar cells, the subpopulation excited by increments in illumination. Two types of spatial processing were identified. We found that ON cone bipolar cells with axons ramifying in the central inner plexiform layer were tuned to preferentially encode small stimuli. By contrast, ON cone bipolar cells with axons ramifying in the proximal inner plexiform layer, nearest the ganglion cell layer, were tuned to encode both small and large stimuli. This dichotomy in spatial tuning is attributable to amacrine cells providing stronger inhibition to central ON cone bipolar cells compared with proximal ON cone bipolar cells. Furthermore, background illumination altered this difference in spatial tuning. It became less pronounced in bright light, as amacrine cell-driven inhibition became pervasive among all ON cone bipolar cells. These results suggest that differential amacrine cell input determined the distinct spatial encoding properties among ON cone bipolar cells. These findings enhance the known parallel processing capacity of the retina. PMID:26203104
Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.
Carreiras, Manuel; Quiñones, Ileana; Hernández-Cabrera, Juan Andrés; Duñabeitia, Jon Andoni
2015-12-01
The present experiment investigates the input coding mechanisms of 3 common printed characters: letters, numbers, and symbols. Despite research in this area, it is yet unclear whether the identity of these 3 elements is processed through the same or different brain pathways. In addition, some computational models propose that the position-in-string coding of these elements responds to general flexible mechanisms of the visual system that are not character-specific, whereas others suggest that the position coding of letters responds to specific processes that are different from those that guide the position-in-string assignment of other types of visual objects. Here, in an fMRI study, we manipulated character position and character identity through the transposition or substitution of 2 internal elements within strings of 4 elements. Participants were presented with 2 consecutive visual strings and asked to decide whether they were the same or different. The results showed: 1) that some brain areas responded more to letters than to numbers and vice versa, suggesting that processing may follow different brain pathways; 2) that the left parietal cortex is involved in letter identity, and critically in letter position coding, specifically contributing to the early stages of the reading process; and that 3) a stimulus-specific mechanism for letter position coding is operating during orthographic processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Preserved subliminal processing and impaired conscious access in schizophrenia
Del Cul, Antoine; Dehaene, Stanislas; Leboyer, Marion
2006-01-01
Background Studies of visual backward masking have frequently revealed an elevated masking threshold in schizophrenia. This finding has frequently been interpreted as indicating a low-level visual deficit. However, more recent models suggest that masking may also involve late and higher-level integrative processes, while leaving intact early “bottom-up” visual processing. Objectives We tested the hypothesis that the backward masking deficit in schizophrenia corresponds to a deficit in the late stages of conscious perception, whereas the subliminal processing of masked stimuli is fully preserved. Method 28 patients with schizophrenia and 28 normal controls performed two backward-masking experiments. We used Arabic digits as stimuli and varied quasi-continuously the interval with a subsequent mask, thus allowing us to progressively “unmask” the stimuli. We finely quantified their degree of visibility using both objective and subjective measures to evaluate the threshold duration for access to consciousness. We also studied the priming effect caused by the variably masked numbers on a comparison task performed on a subsequently presented and highly visible target number. Results The threshold delay between digit and mask necessary for the conscious perception of the masked stimulus was longer in patients compared to control subjects. This higher consciousness threshold in patients was confirmed by an objective and a subjective measure, and both measures were highly correlated for patients as well as for controls. However, subliminal priming of masked numbers was effective and identical in patients compared to controls. Conclusions Access to conscious report of masked stimuli is impaired in schizophrenia, while fast bottom-up processing of the same stimuli, as assessed by subliminal priming, is preserved. These findings suggest a high-level origin of the masking deficit in schizophrenia, although they leave open for further research its exact relation to previously identified bottom-up visual processing abnormalities. PMID:17146006
Becoming Theatrical: Performing Narrative Research, Staging Visual Representation
ERIC Educational Resources Information Center
Valle, Jan W.; Connor, David J.
2012-01-01
This article describes a collaborative project among the author of a book about mothers and special education (based on a collection of oral narratives of mothers who represent diverse generations, races, and social classes), a playwright, and an artist. Together, they created a theatrical and visual staging of the author's narrative research. The…
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli
Kamke, Marc R.; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945
Visual aided pacing in respiratory maneuvers
NASA Astrophysics Data System (ADS)
Rambaudi, L. R.; Rossi, E.; Mántaras, M. C.; Perrone, M. S.; Siri, L. Nicola
2007-11-01
A visual aid to pace self-controlled respiratory cycles in humans is presented. Respiratory manoeuvres need to be accomplished in several clinical and research procedures, among others the studies on Heart Rate Variability. Free-running respiration turns out to be difficult to correlate with other physiologic variables. Because of this, voluntary self-control is asked of the individuals under study. Currently, an acoustic metronome is used to pace respiratory frequency, its main limitation being the impossibility of inducing predetermined timing in the stages within the respiratory cycle. In the present work, visually driven self-control was provided, with separate timing for the four stages of a normal respiratory cycle. This visual metronome (ViMet) was based on a microcontroller which switches an eight-LED bar ON and OFF following a four-stage respiratory-cycle time series set by hand by the operator. The precise timing is also exhibited on an alphanumeric display.
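The four-stage pacing scheme described above is easy to picture with a small simulation; the stage durations, the text-based rendering of the LED bar, and the Python implementation below are illustrative assumptions, not the authors' microcontroller firmware.

```python
import time

# Hypothetical four-stage respiratory cycle (seconds); the real ViMet lets the
# operator set these durations by hand on the device.
STAGES = [
    ("inspiration",      2.0, True),   # LEDs switched ON one by one
    ("post-inspiration", 1.0, None),   # bar held fully ON
    ("expiration",       3.0, False),  # LEDs switched OFF one by one
    ("post-expiration",  1.5, None),   # bar held fully OFF
]
N_LEDS = 8

def show(n_on):
    """Render the eight-LED bar as text (a stand-in for the real LED hardware)."""
    print("\r[" + "#" * n_on + "." * (N_LEDS - n_on) + "]", end="", flush=True)

def run_cycle():
    n_on = 0
    for name, duration, turning_on in STAGES:
        if turning_on is None:          # hold stage: keep the current bar state
            show(n_on)
            time.sleep(duration)
            continue
        step = duration / N_LEDS
        for _ in range(N_LEDS):
            n_on += 1 if turning_on else -1
            show(n_on)
            time.sleep(step)

if __name__ == "__main__":
    for _ in range(3):                  # pace three breathing cycles
        run_cycle()
    print()
```

Replacing the sleep-driven loop with hardware timer interrupts and GPIO writes would be the natural step toward a device of the ViMet kind.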
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions-in the vicinity of the putative visual word form area-around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Two critical periods in early visual cortex during figure-ground segregation.
Wokke, Martijn E; Sligte, Ilja G; Steven Scholte, H; Lamme, Victor A F
2012-11-01
The ability to distinguish a figure from its background is crucial for visual perception. To date, it remains unresolved where and how in the visual system different stages of figure-ground segregation emerge. Neural correlates of figure border detection have consistently been found in early visual cortex (V1/V2). However, areas V1/V2 have also been frequently associated with later stages of figure-ground segregation (such as border ownership or surface segregation). To causally link activity in early visual cortex to different stages of figure-ground segregation, we briefly disrupted activity in areas V1/V2 at various moments in time using transcranial magnetic stimulation (TMS). Prior to stimulation we presented stimuli that made it possible to differentiate between figure border detection and surface segregation. We concurrently recorded electroencephalographic (EEG) signals to examine how neural correlates of figure-ground segregation were affected by TMS. Results show that disruption of V1/V2 in an early time window (96-119 msec) affected detection of figure stimuli and affected neural correlates of figure border detection, border ownership, and surface segregation. TMS applied in a relatively late time window (236-259 msec) selectively deteriorated performance associated with surface segregation. We conclude that areas V1/V2 are not only essential in an early stage of figure-ground segregation when figure borders are detected, but subsequently causally contribute to more sophisticated stages of figure-ground segregation such as surface segregation.
NASA Astrophysics Data System (ADS)
Wang, Y.; Soga, K.; DeJong, J. T.; Kabla, A.
2017-12-01
Microbial-induced carbonate precipitation (MICP), one of the bio-mineralization processes, is an innovative subsurface improvement technique for enhancing the strength and stiffness of soils and controlling their hydraulic conductivity. These macro-scale engineering properties of MICP-treated soils are controlled by micro-scale characteristics of the precipitated carbonate, such as its content and distribution in the soil matrix. The precipitation process itself is affected by the amount of bacteria, reaction kinetics, porous medium geometry and flow distribution in the soil. Accordingly, to better understand the MICP process at the pore scale, a new experimental technique that can observe the entire process at that scale was developed. In this study, a 2-D transparent microfluidic chip made of polydimethylsiloxane (PDMS) representing the soil matrix was designed and fabricated. A staged-injection MICP treatment procedure was simulated inside the microfluidic chip while being continuously monitored using microscopic techniques. The procedure started with the injection of a bacterial suspension, followed by a settling period for bacterial attachment, and ended with multiple injections of cementation liquid. The main MICP processes visualized during this procedure included bacterial transport and attachment during the bacterial injection, bacterial attachment and growth during settling, bacterial detachment during the cementation-liquid injections, and cementation development both during and after the cementation-liquid injections. It is suggested that visualization of the main MICP processes using the microfluidic technique can improve understanding of the fundamental mechanisms of MICP and consequently help improve the treatment technique for in situ implementation of MICP.
Expression Atlas: gene and protein expression across multiple studies and organisms
Tang, Y Amy; Bazant, Wojciech; Burke, Melissa; Fuentes, Alfonso Muñoz-Pomer; George, Nancy; Koskinen, Satu; Mohammed, Suhaib; Geniza, Matthew; Preece, Justin; Jarnuczak, Andrew F; Huber, Wolfgang; Stegle, Oliver; Brazma, Alvis; Petryszak, Robert
2018-01-01
Expression Atlas (http://www.ebi.ac.uk/gxa) is an added-value database that provides information about gene and protein expression in different species and contexts, such as tissue, developmental stage, disease or cell type. The available public and controlled-access data sets from different sources are curated and re-analysed using standardized, open source pipelines and made available for queries, download and visualization. As of August 2017, Expression Atlas holds data from 3,126 studies across 33 different species, including 731 from plants. Data from large-scale RNA sequencing studies including Blueprint, PCAWG, ENCODE, GTEx and HipSci can be visualized next to each other. In Expression Atlas, users can query genes or gene-sets of interest and explore their expression across or within species, tissues, developmental stages in a constitutive or differential context, representing the effects of diseases, conditions or experimental interventions. All processed data matrices are available for direct download in tab-delimited format or as R-data. In addition to the web interface, data sets can now be searched and downloaded through the Expression Atlas R package. Novel features and visualizations include the on-the-fly analysis of gene set overlaps and the option to view gene co-expression in experiments investigating constitutive gene expression across tissues or other conditions. PMID:29165655
Attentional Selection in Object Recognition
1993-02-01
order. It also affects the choice of strategies in both the filtering and arbiter stages. The set… such processing. In Treisman's model this was hidden in the concept of the selection filter. Later computational models of attention tried to… This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for
On pleasure and thrill: the interplay between arousal and valence during visual word recognition.
Recio, Guillermo; Conrad, Markus; Hansen, Laura B; Jacobs, Arthur M
2014-07-01
We investigated the interplay between arousal and valence in the early processing of affective words. Event-related potentials (ERPs) were recorded while participants read words organized in an orthogonal design with the factors valence (positive, negative, neutral) and arousal (low, medium, high) in a lexical decision task. We observed faster reaction times for words of positive valence and for those of high arousal. Data from ERPs showed increased early posterior negativity (EPN) suggesting improved visual processing of these conditions. Valence effects appeared for medium and low arousal and were absent for high arousal. Arousal effects were obtained for neutral and negative words but were absent for positive words. These results suggest independent contributions of arousal and valence at early attentional stages of processing. Arousal effects preceded valence effects in the ERP data suggesting that arousal serves as an early alert system preparing a subsequent evaluation in terms of valence. Copyright © 2014 Elsevier Inc. All rights reserved.
Learned value and object perception: Accelerated perception or biased decisions?
Rajsic, Jason; Perera, Harendri; Pratt, Jay
2017-02-01
Learned value is known to bias visual search toward valued stimuli. However, some uncertainty exists regarding the stage of visual processing that is modulated by learned value. Here, we directly tested the effect of learned value on preattentive processing using temporal order judgments. Across four experiments, we imbued some stimuli with high value and some with low value, using a nonmonetary reward task. In Experiment 1, we replicated the value-driven distraction effect, validating our nonmonetary reward task. Experiment 2 showed that high-value stimuli, but not low-value stimuli, exhibit a prior-entry effect. Experiment 3, which reversed the temporal order judgment task (i.e., reporting which stimulus came second), showed no prior-entry effect, indicating that although a response bias may be present for high-value stimuli, they are still reported as appearing earlier. However, Experiment 4, using a simultaneity judgment task, showed no shift in temporal perception. Overall, our results support the conclusion that learned value biases perceptual decisions about valued stimuli without speeding preattentive stimulus processing.
Read-out of emotional information from iconic memory: the longevity of threatening stimuli.
Kuhbandner, Christof; Spitzer, Bernhard; Pekrun, Reinhard
2011-05-01
Previous research has shown that emotional stimuli are more likely than neutral stimuli to be selected by attention, indicating that the processing of emotional information is prioritized. In this study, we examined whether the emotional significance of stimuli influences visual processing already at the level of transient storage of incoming information in iconic memory, before attentional selection takes place. We used a typical iconic memory task in which the delay of a poststimulus cue, indicating which of several visual stimuli has to be reported, was varied. Performance decreased rapidly with increasing cue delay, reflecting the fast decay of information stored in iconic memory. However, although neutral stimulus information and emotional stimulus information were initially equally likely to enter iconic memory, the subsequent decay of the initially stored information was slowed for threatening stimuli, a result indicating that fear-relevant information has prolonged availability for read-out from iconic memory. This finding provides the first evidence that emotional significance already facilitates stimulus processing at the stage of iconic memory.
NASA Astrophysics Data System (ADS)
Brattico, Elvira; Brattico, Pauli; Vuust, Peter
2017-07-01
In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to "specific correlates of the brain" (a similar attempt was previously done for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.
Attentional gating models of object substitution masking.
Põder, Endel
2013-11-01
Di Lollo, Enns, and Rensink (2000) proposed the computational model of object substitution (CMOS) to explain their experimental results with sparse visual maskers. This model is supposedly based on reentrant hypothesis testing in the visual system, and the modeled experiments are believed to demonstrate these reentrant processes in human vision. In this study, I analyze the main assumptions of this model. I argue that CMOS is a version of the attentional gating model and that its relationship with reentrant processing is rather illusory. The fit of this model to the data indicates that reentrant hypothesis testing is not necessary for the explanation of object substitution masking (OSM). Further, the original CMOS cannot predict some important aspects of the experimental data. I test 2 new models incorporating an unselective processing (divided attention) stage; these models are more consistent with data from OSM experiments. My modeling shows that the apparent complexity of OSM can be reduced to a few simple and well-known mechanisms of perception and memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Coactivation of response initiation processes with redundant signals.
Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N
2018-05-14
During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
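As a rough illustration of how additive initiation-related activation can reproduce a redundant signal effect, here is a minimal simulation sketch; the linear accumulators, threshold, rates, and noise values are assumptions for illustration, not the model parameters fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rt(visual=True, auditory=True, soa_ms=0.0, n_trials=5000):
    """Mean RT (ms) when initiation-related activation from each stimulus
    accumulates linearly and the two sources simply add (coactivation)."""
    dt = 1.0                      # 1-ms simulation steps
    threshold = 100.0             # arbitrary initiation threshold
    rts = []
    for _ in range(n_trials):
        # per-trial accumulation rates (assumed values, with trial-to-trial noise)
        v_rate = max(rng.normal(1.0, 0.3), 0.0) if visual else 0.0
        a_rate = max(rng.normal(1.2, 0.3), 0.0) if auditory else 0.0
        activation, t = 0.0, 0.0
        while activation < threshold and t < 2000:
            t += dt
            activation += v_rate * dt
            if t > soa_ms:        # the auditory stimulus may be delayed by an SOA
                activation += a_rate * dt
        rts.append(t)
    return float(np.mean(rts))

print("visual only    :", round(simulate_rt(auditory=False)))
print("auditory only  :", round(simulate_rt(visual=False)))
print("redundant 0 ms :", round(simulate_rt(soa_ms=0)))
print("redundant 125  :", round(simulate_rt(soa_ms=125)))
```

Under these assumptions the summed activation crosses threshold sooner when both stimuli are present, and the benefit shrinks as the auditory delay grows, mirroring the qualitative pattern of a redundant signal effect.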
Multi-stage robust scheme for citrus identification from high resolution airborne images
NASA Astrophysics Data System (ADS)
Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier
2008-10-01
Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources by using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptrons, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with the available visual photointerpretation resources. A level of confidence based on the agreement between classifiers allows effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
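A minimal scikit-learn sketch of the combine-and-flag idea described above: three classifiers vote on each parcel, and parcels on which they disagree are flagged for visual photointerpretation. The synthetic features, classifier settings, and agreement rule are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-parcel features extracted from VHR imagery.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = [
    DecisionTreeClassifier(max_depth=8, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    SVC(kernel="rbf", gamma="scale", random_state=0),
]
preds = np.array([clf.fit(X_tr, y_tr).predict(X_te) for clf in classifiers])

# Majority vote as the combined label; full agreement as a simple confidence flag.
votes = preds.sum(axis=0)
combined = (votes >= 2).astype(int)
agree_all = (preds == preds[0]).all(axis=0)

accuracy = (combined == y_te).mean()
print(f"combined accuracy: {accuracy:.3f}")
print(f"parcels flagged for photointerpretation: {(~agree_all).sum()} of {len(y_te)}")
```

Tightening or loosening the agreement rule directly controls how many parcels are sent for manual review, which is the management lever mentioned in the abstract.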
Analysis of early thrombus dynamics in a humanized mouse laser injury model.
Wang, Weiwei; Lindsey, John P; Chen, Jianchun; Diacovo, Thomas G; King, Michael R
2014-01-01
Platelet aggregation and thrombus formation at the site of injury is a dynamic process that involves the continuous addition of new platelets as well as thrombus rupture. In the early stages of hemostasis (within minutes after vessel injury) this process can be visualized by transfusing fluorescently labeled human platelets and observing their deposition and detachment. These two counterbalancing events help the developing thrombus reach a steady-state morphology, where it is large enough to cover the injured vessel surface but not too large to form a severe thrombotic occlusion. In this study, the spatial and temporal aspects of early stage thrombus dynamics which result from laser-induced injury on arterioles of cremaster muscle in the humanized mouse were visualized using fluorescent microscopy. It was found that rolling platelets show preference for the upstream region while tethering/detaching platelets were primarily found downstream. It was also determined that the platelet deposition rate is relatively steady, whereas the effective thrombus coverage area does not increase at a constant rate. By introducing a new method to graphically represent the real time in vivo physiological shear stress environment, we conclude that the thrombus continuously changes shape by regional growth and decay, and neither dominates in the high shear stress region.
FGF /FGFR Signal Induces Trachea Extension in the Drosophila Visual System
Chu, Wei-Chen; Lee, Yuan-Ming; Henry Sun, Yi
2013-01-01
The Drosophila compound eye is a large sensory organ that places a high demand on oxygen supplied by the tracheal system. Although the development and function of the Drosophila visual system have been extensively studied, the development and contribution of its tracheal system have not been systematically examined. To address this issue, we studied the tracheal patterns and developmental process in the Drosophila visual system. We found that the retinal tracheae are derived from air sacs in the head, and that the ingrowth of retinal tracheae begins at the mid-pupal stage. Tracheal development has three stages. First, the air sacs form near the optic lobe at 42-47% of pupal development (pd). Second, at 47-52% pd, the air sacs extend branches along the base of the retina following a posterior-to-anterior direction and further form the tracheal network under the fenestrated membrane (TNUFM). Third, the TNUFM extends fine branches into the retina following a proximal-to-distal direction after 60% pd. Furthermore, we found that tracheal extension in both the retina and the TNUFM is dependent on FGF(Bnl)/FGFR(Btl) signaling. Our results also provided strong evidence that the photoreceptors are the source of the Bnl ligand that guides tracheal ingrowth. Our work is the first systematic study of tracheal development in the visual system, and also the first study demonstrating the interactions of two well-studied systems: the eye and the trachea. PMID:23991208
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
SnapShot: Visualization to Propel Ice Hockey Analytics.
Pileggi, H; Stolper, C D; Boyle, J M; Stasko, J T
2012-12-01
Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders. Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, yet given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.
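In the spirit of the radial heat map mentioned above, here is a minimal matplotlib sketch that bins shots by angle and length on a polar grid; the synthetic shot data and bin edges are assumptions for illustration, not SnapShot's actual implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Synthetic shot events: angle (radians, relative to the goal) and length (feet).
angles = rng.uniform(-np.pi / 2, np.pi / 2, 500)
lengths = np.abs(rng.normal(30, 15, 500))

# Bin shots by angle and length, then draw the counts on a polar grid.
angle_edges = np.linspace(-np.pi / 2, np.pi / 2, 13)
length_edges = np.linspace(0, 80, 9)
counts, _, _ = np.histogram2d(angles, lengths, bins=[angle_edges, length_edges])

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
theta, r = np.meshgrid(angle_edges, length_edges, indexing="ij")
mesh = ax.pcolormesh(theta, r, counts, cmap="viridis")
ax.set_thetamin(-90)
ax.set_thetamax(90)
fig.colorbar(mesh, ax=ax, label="shots per bin")
plt.show()
```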
The Early Stage of Neutron Tomography for Cultural Heritage Study in Thailand
NASA Astrophysics Data System (ADS)
Khaweerat, S.; Ratanatongchai, W.; Wonglee, S.; Schillinger, B.
In parallel to the upgrade of the neutron imaging facility at TRR-1/M1 since 2015, practice with image-processing software has led to the implementation of neutron tomography (NT). The current setup provides a thermal neutron flux of 1.08×10⁶ cm⁻²·s⁻¹ at the exposure position. In general, the sample was fixed on a plate at the top of a rotary stage controlled by LabVIEW 2009 Version 9.0.1. The incremental step can be adjusted from 0.45 to 7.2 degrees. A 16-bit CCD camera assembled with a Nikkor 50 mm f/1.2 lens was used to record light from a ⁶LiF/ZnS (green) neutron converter screen. The exposure time for each shot was 60 seconds, resulting in an acquisition time of approximately three hours for a complete rotation of the sample. Afterwards, the batch of two-dimensional neutron images of the sample was read into the reconstruction and visualization software Octopus Reconstruction 8.8 and Octopus Visualization 2.0, respectively. The results revealed that system alignment is critical: the stability of a heavy sample must be maintained at every angle of rotation, and previous alignments showed instability of the supporting plane while tilting the sample, indicating that the sample stage should be replaced. Even though NT is a lengthy process and involves processing large amounts of data, it offers an opportunity to better understand features of an object in more detail than neutron radiography. Digital NT also allows us to separate inner features that appear superimposed in radiography by cross-sectioning the 3D data set of an object without destruction. As a result, NT is a significant tool for revealing hidden information contained in the inner structure of cultural heritage objects, providing great benefits for archaeological study, conservation processes and authenticity investigation.
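The quoted acquisition time follows directly from the exposure time and the angular step; below is a small sketch of that arithmetic, assuming a full 360° rotation and neglecting readout overhead (under these assumptions, the reported ~3 hours corresponds to roughly a 2° step).

```python
EXPOSURE_S = 60.0           # exposure time per projection (from the setup above)
ROTATION_DEG = 360.0        # assumed full revolution of the sample

def acquisition_hours(step_deg, exposure_s=EXPOSURE_S, rotation_deg=ROTATION_DEG):
    """Total scan time in hours for a given angular increment."""
    n_projections = int(rotation_deg / step_deg)
    return n_projections * exposure_s / 3600.0

for step in (0.45, 2.0, 7.2):   # the stage's limits and a mid-range value
    print(f"step {step:4.2f} deg -> {acquisition_hours(step):4.1f} h")
```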
The practice of agent-based model visualization.
Dorin, Alan; Geard, Nicholas
2014-01-01
We discuss approaches to agent-based model visualization. Agent-based modeling has its own requirements for visualization, some shared with other forms of simulation software, and some unique to this approach. In particular, agent-based models are typified by complexity, dynamism, nonequilibrium and transient behavior, heterogeneity, and a researcher's interest in both individual- and aggregate-level behavior. These are all traits requiring careful consideration in the design, experimentation, and communication of results. In the case of all but final communication for dissemination, researchers may not make their visualizations public. Hence, the knowledge of how to visualize during these earlier stages is unavailable to the research community in a readily accessible form. Here we explore means by which all phases of agent-based modeling can benefit from visualization, and we provide examples from the available literature and online sources to illustrate key stages and techniques.
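To make the individual- versus aggregate-level distinction concrete, here is a minimal sketch that visualizes a toy agent-based model at both levels; the random-walk model, the chosen aggregate statistic, and the two-panel layout are illustrative assumptions, not drawn from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Toy agent-based model: agents take small random walks on the unit square.
n_agents, n_steps = 100, 200
positions = rng.uniform(0, 1, size=(n_agents, 2))
mean_distance = []

for _ in range(n_steps):
    positions += rng.normal(0, 0.01, size=positions.shape)
    positions = np.clip(positions, 0, 1)
    # Aggregate-level observable: mean pairwise distance between agents.
    diffs = positions[:, None, :] - positions[None, :, :]
    mean_distance.append(np.sqrt((diffs ** 2).sum(-1)).mean())

fig, (ax_ind, ax_agg) = plt.subplots(1, 2, figsize=(9, 4))
ax_ind.scatter(positions[:, 0], positions[:, 1], s=10)        # individual level
ax_ind.set_title("agent positions (final step)")
ax_agg.plot(mean_distance)                                     # aggregate level
ax_agg.set_title("mean pairwise distance over time")
ax_agg.set_xlabel("step")
plt.tight_layout()
plt.show()
```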
Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T
2014-07-01
We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantically-related effects triggered across modalities would indicate a similar tight interconnection between the signers' two languages like that described for spoken language bilinguals. Remarkable structural similarity of the N400 and LPC components with varying group differences between the spoken and signed targets was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups. It was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically-related processing effect to the visual targets reflected in the N400 or the LPC; instead they appeared to rely more on visual post-lexical analyzing stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language background and/or usage. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sundaram, Thirunavukkarasu; Jeong, Gwang-Woo; Kim, Tae-Hoon; Kim, Gwang-Won; Baek, Han-Su; Kang, Heoung-Keun
2010-01-01
To assess the dynamic activations of the key brain areas associated with the time-course of the sexual arousal evoked by visual sexual stimuli in healthy male subjects. Fourteen right-handed heterosexual male volunteers participated in this study. Rest periods and erotic video stimulation were presented in alternation according to the standard block design. In order to illustrate and quantify the spatiotemporal activation patterns of the key brain regions, the activation period was divided into three stages: EARLY, MID and LATE. For the group result (p < 0.05), when comparing the MID stage with the EARLY stage, a significant increase of brain activation was observed in areas that included the inferior frontal gyrus, the supplementary motor area, the hippocampus, the head of the caudate nucleus, the midbrain, the superior occipital gyrus and the fusiform gyrus. At the same time, when comparing the EARLY stage with the MID stage, the putamen, the globus pallidus, the pons, the thalamus, the hypothalamus, the lingual gyrus and the cuneus yielded significantly increased activations. When comparing the LATE stage with the MID stage, all the above-mentioned brain regions showed elevated activations except the hippocampus. Our results illustrate the spatiotemporal activation patterns of the key brain regions across the three stages of visual sexual arousal.
Learning to Recognize Patterns: Changes in the Visual Field with Familiarity
NASA Astrophysics Data System (ADS)
Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo
1995-01-01
Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three-component model of pattern perception. In the first stage a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informativeness areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].
Cognitive penetration of early vision in face perception.
Cecchi, Ariel S
2018-06-13
Cognitive and affective penetration of perception refers to the influence that higher mental states such as beliefs and emotions have on perceptual systems. Psychological and neuroscientific studies appear to show that these states modulate the visual system at the visuomotor, attentional, and late levels of processing. However, empirical evidence showing that similar consequences occur in early stages of visual processing seems to be scarce. In this paper, I argue that psychological evidence does not seem to be either sufficient or necessary to argue in favour of or against the cognitive penetration of perception in either late or early vision. In order to do that we need to have recourse to brain imaging techniques. Thus, I introduce a neuroscientific study and argue that it seems to provide well-grounded evidence for the cognitive penetration of early vision in face perception. I also examine and reject alternative explanations to my conclusion. Copyright © 2018 Elsevier Inc. All rights reserved.
Perceptual learning: toward a comprehensive theory.
Watanabe, Takeo; Sasaki, Yuka
2015-01-03
Visual perceptual learning (VPL) is a long-term increase in performance resulting from visual perceptual experience. Task-relevant VPL of a feature results from training on a task for which that feature is relevant. Task-irrelevant VPL arises as a result of exposure to a feature irrelevant to the trained task. At least two serious problems exist. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has ever explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in the processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results can be explained.
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
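Below is a minimal sketch of a Producer-Consumer collection loop of the kind described above: producers sample health parameters of individual components and a consumer serializes them for a downstream store. The queue-based threading design, metric names, and JSON output are assumptions for illustration, not the actual EventIndex monitoring code.

```python
import json
import queue
import random
import threading
import time

metrics_queue: "queue.Queue[dict]" = queue.Queue()

def producer(component: str, n_samples: int = 5) -> None:
    """Periodically sample hypothetical health parameters of one component."""
    for _ in range(n_samples):
        metrics_queue.put({
            "component": component,
            "timestamp": time.time(),
            "load": round(random.uniform(0.0, 1.0), 2),   # stand-in health metric
        })
        time.sleep(0.1)

def consumer(stop: threading.Event) -> None:
    """Collect samples and emit them as JSON documents for a monitoring store."""
    while not stop.is_set() or not metrics_queue.empty():
        try:
            doc = metrics_queue.get(timeout=0.2)
        except queue.Empty:
            continue
        print(json.dumps(doc))          # in practice: ship to the indexing backend

stop = threading.Event()
producers = [threading.Thread(target=producer, args=(name,))
             for name in ("hadoop", "web-interface", "data-collection")]
collector = threading.Thread(target=consumer, args=(stop,))
collector.start()
for t in producers:
    t.start()
for t in producers:
    t.join()
stop.set()
collector.join()
```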
Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study
Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng
2013-01-01
Background In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether the phonological information is rapidly and automatically extracted in Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constant varying visual stream. In the stream, most stimuli were homophones of Chinese characters: The phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN was involved in neural activations of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235
Stage II Chronic Maxillary Atelectasis Associated with Subclinical Visual Field Defect.
Mangussi-Gomes, João; Nakanishi, Márcio; Chalita, Maria Regina; Damasco, Fabiana; De Oliveira, Carlos Augusto Costa Pires
2013-10-01
Introduction Chronic maxillary atelectasis (CMA) is characterized by a persistent decrease in the maxillary sinus volume due to inward bowing of its walls. According to its severity, it may be classified into three clinical-radiological stages. Objective To report a case of stage II CMA associated with subclinical visual field defect. Case Report A 34-year-old woman presented with a 15-year history of recurrent episodes of sinusitis and intermittent right facial discomfort for the past 5 years. She denied visual complaints, and no facial deformities were observed on physical examination. Paranasal sinus computed tomography (CT) demonstrated a completely opacified right maxillary sinus with inward bowing of its walls, suggesting the diagnosis of stage II CMA. A computerized campimetry (CC) disclosed a scotoma adjacent to the blind spot of the right eye, indicating a possible damage to the optic nerve. The patient was submitted to functional endoscopic sinus surgery, with drainage of a thick mucous fluid from the sinus. She did well after surgery and has been asymptomatic since then. Postoperative CT was satisfactory and CC was normal. Discussion CMA occurs because of a persistent ostiomeatal obstruction, which creates negative pressure inside the sinus. It is associated with nasosinusal symptoms but had never been described in association with any visual field defect. It can be divided into stage I (membranous deformity), stage II (bony deformity), and stage III (clinical deformity). The silent sinus syndrome is a special form of CMA. This term should only be used to describe those cases with spontaneous enophthalmos, hypoglobus, and/or midfacial deformity in the absence of nasosinusal symptoms.
Selective weighting of action-related feature dimensions in visual working memory.
Heuer, Anna; Schubö, Anna
2017-08-01
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Zakharova, I A; Avdeev, R V; Pristavka, V A; Surnin, S N; Makhmutov, V Yu; Savrasova, I I
To investigate the effectiveness of neuromidin in the treatment of patients with primary glaucoma and compensated intraocular pressure (IOP). A total of 40 patients (80 eyes) were examined: 10 eyes with early glaucoma, 36 eyes with moderate-stage glaucoma, 33 eyes with advanced glaucoma, and 1 eye with end-stage glaucoma. In 19 eyes, IOP was controlled with beta-blockers, in 11 eyes with carbonic anhydrase inhibitors, in 10 eyes with prostaglandin analogues, and in 39 eyes with combination drugs. Twenty-six eyes had previously received glaucoma surgery. Ipidacrine was prescribed in tablets at 20 mg 2 times daily for 25 days. Treatment effectiveness was judged by visual functions, hydrodynamics, and morphometric parameters of the optic disc. In moderate-stage eyes, visual acuity improved in 66.6% of cases and remained unchanged in 33.3%. In advanced-stage eyes, visual acuity improved in 51.5% of cases and remained unchanged in 48.5%. The visual field broadened in all cases. Moreover, under neuromidin therapy, the number of scotomas in early-stage eyes decreased, while the number of areas with normal retinal sensitivity increased by 14.9%. In advanced-stage glaucoma, the effect was less pronounced: the numbers of type 1 and type 2 scotomas decreased by 3.0±0.6% and 2.9±0.8%, respectively; the number of absolute scotomas did not change; and the number of areas with normal retinal sensitivity increased by 7.4±2.0%. Also, P0 was found to be reduced and intraocular fluid outflow activated. In early and moderate glaucoma, there was a significant reduction in cup area as well as an increase in neuroretinal rim area and retinal nerve fiber layer thickness. In advanced-stage cases, only the retinal nerve fiber layer thickness changed. Neuromidin has a positive impact on visual function, hydrodynamics, and morphometric parameters of the optic disc.
Visual representation of spatiotemporal structure
NASA Astrophysics Data System (ADS)
Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.
1998-07-01
The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies concerning ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently. This suggests a unified representation at an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.
The time course of morphological processing during spoken word recognition in Chinese.
Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan
2017-12-01
We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
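Below is a minimal sketch of a multistage, coarse-to-fine decomposition in the spirit of the approach described above; the mean-pooling pyramid and the single per-stage summary (location of the brightest block) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(3)

def downsample(image: np.ndarray) -> np.ndarray:
    """Halve the resolution by 2x2 mean pooling (a crude coarser stage)."""
    h, w = image.shape
    return image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multistage_summary(image: np.ndarray, n_stages: int = 4):
    """Build a coarse-to-fine hierarchy and emit one quick result per stage."""
    levels, summaries = [image], []
    for _ in range(n_stages - 1):
        levels.append(downsample(levels[-1]))
    for level in reversed(levels):                 # coarsest (fastest) stage first
        # single per-stage output: where the brightest region sits at this scale
        summaries.append(np.unravel_index(np.argmax(level), level.shape))
    return summaries

image = rng.uniform(0, 1, size=(64, 64))
image[40:48, 10:18] += 2.0                         # embed a bright "stimulus"
for stage, loc in enumerate(multistage_summary(image)):
    print(f"stage {stage}: brightest block at {loc}")
```

Each stage returns an answer as soon as it finishes, so a coarse response is available immediately while finer stages refine the localization.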
Differential effect of visual motion adaption upon visual cortical excitability.
Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer
2017-03-01
The objectives of this study were 1 ) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2 ) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.
The Face-Processing Network Is Resilient to Focal Resection of Human Visual Cortex
Jonas, Jacques; Gomez, Jesse; Maillard, Louis; Brissart, Hélène; Hossu, Gabriela; Jacques, Corentin; Loftus, David; Colnat-Coulbois, Sophie; Stigliani, Anthony; Barnett, Michael A.; Grill-Spector, Kalanit; Rossion, Bruno
2016-01-01
Human face perception requires a network of brain regions distributed throughout the occipital and temporal lobes with a right hemisphere advantage. Present theories consider this network as either a processing hierarchy beginning with the inferior occipital gyrus (occipital face area; IOG-faces/OFA) or a multiple-route network with nonhierarchical components. The former predicts that removing IOG-faces/OFA will detrimentally affect downstream stages, whereas the latter does not. We tested this prediction in a human patient (Patient S.P.) requiring removal of the right inferior occipital cortex, including IOG-faces/OFA. We acquired multiple fMRI measurements in Patient S.P. before and after a preplanned surgery and multiple measurements in typical controls, enabling both within-subject/across-session comparisons (Patient S.P. before resection vs Patient S.P. after resection) and between-subject/across-session comparisons (Patient S.P. vs controls). We found that the spatial topology and selectivity of downstream ipsilateral face-selective regions were stable 1 and 8 month(s) after surgery. Additionally, the reliability of distributed patterns of face selectivity in Patient S.P. before versus after resection was not different from across-session reliability in controls. Nevertheless, postoperatively, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1 of the resected hemisphere. Diffusion weighted imaging in Patient S.P. and controls identifies white matter tracts connecting retinotopic areas to downstream face-selective regions, which may contribute to the stable and plastic features of the face network in Patient S.P. after surgery. Together, our results support a multiple-route network of face processing with nonhierarchical components and shed light on stable and plastic features of high-level visual cortex following focal brain damage. SIGNIFICANCE STATEMENT Brain networks consist of interconnected functional regions commonly organized in processing hierarchies. Prevailing theories predict that damage to the input of the hierarchy will detrimentally affect later stages. We tested this prediction with multiple brain measurements in a rare human patient requiring surgical removal of the putative input to a network processing faces. Surprisingly, the spatial topology and selectivity of downstream face-selective regions are stable after surgery. Nevertheless, representations of visual space were typical in dorsal face-selective regions but atypical in ventral face-selective regions and V1. White matter connections from outside the face network may support these stable and plastic features. As processing hierarchies are ubiquitous in biological and nonbiological systems, our results have pervasive implications for understanding the construction of resilient networks. PMID:27511014
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
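A rough sketch of the clustering step, using a plain (non-kernelized, non-sequence-based) self-organizing map on precomputed per-event acoustic feature vectors; the map size, learning schedule, and feature choice are illustrative assumptions, not the paper's pipeline:

```python
# Minimal sketch: a standard self-organizing map on precomputed acoustic
# feature vectors (e.g., per-event MFCC means). This is a simplified stand-in
# for the kernelized, sequence-based SOM described above.
import numpy as np

def train_som(features, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = features.shape
    weights = rng.random((rows, cols, d))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        x = features[rng.integers(n)]
        # Best-matching unit (BMU): node whose weight vector is closest to x
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (rows, cols))
        # Decay learning rate and neighbourhood width over time
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        # Gaussian neighbourhood around the BMU pulls nearby nodes toward x
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def map_events(features, weights):
    """Assign each sound event to its best-matching map node (cluster)."""
    dists = np.linalg.norm(weights[None] - features[:, None, None, :], axis=-1)
    return np.argmin(dists.reshape(len(features), -1), axis=1)

# Usage with toy data standing in for per-event acoustic features
events = np.random.rand(300, 13)
som = train_som(events)
print(map_events(events, som)[:10])
```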
Maeda, Tatsuro; Shiraga, Seizaburo; Araki, Tetsuya; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki
2009-07-01
Cell-surface engineering (Ueda et al., 2000) has been applied to develop a novel technique to visualize yeast in bread dough. Enhanced green fluorescent protein (EGFP) was bonded to the surface of yeast cells, and 0.5% EGFP yeasts were mixed into the dough samples at four different mixing stages. The samples were placed on a cryostat at -30 degrees C and sliced at 10 microns. The sliced samples were observed at an excitation wavelength of 480 nm and a fluorescence wavelength of 520 nm. The results indicated that the combination of the EGFP-displayed yeasts, rapid freezing, and cryo-sectioning made it possible to visualize the 2-D distribution of yeast in bread dough to the extent that the EGFP yeasts could be clearly distinguished from the auto-fluorescent background of bread dough.
He, Xun; Witzel, Christoph; Forder, Lewis; Clifford, Alexandra; Franklin, Anna
2014-04-01
Prior claims that color categories affect color perception are confounded by inequalities in the color space used to equate same- and different-category colors. Here, we equate same- and different-category colors in the number of just-noticeable differences, and measure event-related potentials (ERPs) to these colors on a visual oddball task to establish if color categories affect perceptual or post-perceptual stages of processing. Category effects were found from 200 ms after color presentation, only in ERP components that reflect post-perceptual processes (e.g., N2, P3). The findings suggest that color categories affect post-perceptual processing, but do not affect the perceptual representation of color.
DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide
2013-01-01
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700
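A hedged sketch of one analysis in this spirit, comparing a neural representational dissimilarity matrix with shape-based and category-based model matrices; the authors' specific clustering algorithm is not reproduced, and all data below are simulated:

```python
# Generic representational-similarity comparison: how well do a shape model
# and a semantic-category model account for the geometry of IT responses?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects, n_neurons = 20, 100
responses = rng.random((n_objects, n_neurons))        # mean IT response per object
shape_features = rng.random((n_objects, 10))          # stand-in shape descriptors
categories = rng.integers(0, 2, n_objects)            # 0 = inanimate, 1 = animate

neural_rdm = pdist(responses, metric="correlation")   # 1 - Pearson r per object pair
shape_rdm = pdist(shape_features, metric="euclidean")
category_rdm = pdist(categories[:, None].astype(float), metric="cityblock")  # 0 same class, 1 different

# Rank-correlate each model RDM with the neural RDM
rho_shape, _ = spearmanr(neural_rdm, shape_rdm)
rho_category, _ = spearmanr(neural_rdm, category_rdm)
print("shape model:", rho_shape, "category model:", rho_category)
```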
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inferences for a single cause (source) and two causes (for two senses such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioceptive training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to attend to two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
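For reference, a sketch of the standard single-source, reliability-weighted (maximum-likelihood) fusion rule often used as a benchmark for such models; the numbers are arbitrary and this is not the authors' recurrent network:

```python
# Reliability-weighted combination of two noisy position estimates: each cue
# is weighted by its inverse variance, and the fused variance is reduced.
def fuse(visual_est, visual_var, prop_est, prop_var):
    """Maximum-likelihood combination of visual and proprioceptive estimates."""
    w_v = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / prop_var)
    fused = w_v * visual_est + (1.0 - w_v) * prop_est
    fused_var = 1.0 / (1.0 / visual_var + 1.0 / prop_var)
    return fused, fused_var

# Example: vision is more reliable (smaller variance), so it dominates.
print(fuse(visual_est=10.0, visual_var=1.0, prop_est=14.0, prop_var=4.0))
# -> estimate pulled toward 10.0, combined variance smaller than either input
```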
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
Feldman, Tatiana B; Yakovleva, Marina A; Larichev, Andrey V; Arbukhanova, Patimat M; Radchenko, Alexandra Sh; Borzenok, Sergey A; Kuzmin, Vladimir A; Ostrovsky, Mikhail A
2018-05-22
The aim of this work is the determination of quantitative diagnostic criteria based on the spectral characteristics of fundus autofluorescence to detect early stages of degeneration in the retina and retinal pigment epithelium (RPE). RPE cell suspension samples were obtained from cadaver eyes with and without signs of age-related macular degeneration (AMD). Fluorescence analysis at an excitation wavelength of 488 nm was performed. The fluorescence lifetimes of lipofuscin-granule fluorophores were measured by the time-correlated photon counting method. Comparative analysis of fluorescence spectra of RPE cell suspensions from the cadaver eyes with and without signs of AMD showed a significant difference in fluorescence intensity at 530-580 nm in response to fluorescence excitation at 488 nm. It was notably higher in eyes with visual pathology than in normal eyes regardless of the age of the eye donor. Measurements of fluorescence lifetimes of lipofuscin fluorophores showed that the contribution of photooxidation and photodegradation products of bisretinoids to the total fluorescence at 530-580 nm of RPE cell suspensions was greater in eyes with visual pathology than in normal eyes. Because photooxidation and photodegradation products of bisretinoids are markers of photodestructive processes, which can cause RPE cell death and initiate degenerative processes in the retina, quantitative determination of increases in these bisretinoid products in lipofuscin granules may be used to establish quantitative diagnostic criteria for degenerative processes in the retina and RPE.
Emotion Modulation of Visual Attention: Categorical and Temporal Characteristics
Ciesielski, Bethany G.; Armstrong, Thomas; Zald, David H.; Olatunji, Bunmi O.
2010-01-01
Background Experimental research has shown that emotional stimuli can either enhance or impair attentional performance. However, the relative effects of specific emotional stimuli and the specific time course of these differential effects are unclear. Methodology/Principal Findings In the present study, participants (n = 50) searched for a single target within a rapid serial visual presentation of images. Irrelevant fear, disgust, erotic or neutral images preceded the target by two, four, six, or eight items. At lag 2, erotic images induced the greatest deficits in subsequent target processing compared to other images, consistent with a large emotional attentional blink. Fear and disgust images also produced larger attentional blinks at lag 2 than neutral images. Erotic, fear, and disgust images continued to induce greater deficits than neutral images at lags 4 and 6. However, target processing deficits induced by erotic, fear, and disgust images at intermediate lags (lags 4 and 6) did not consistently differ from each other. In contrast to performance at lags 2, 4, and 6, enhancement in target processing for emotional stimuli was observed in comparison to neutral stimuli at lag 8. Conclusions/Significance These findings suggest that task-irrelevant emotion information, particularly erotica, impairs intentional allocation of attention at early temporal stages, but at later temporal stages, emotional stimuli can have an enhancing effect on directed attention. These data suggest that the effects of emotional stimuli on attention can be both positive and negative depending upon temporal factors. PMID:21079773
Molecular magnetic resonance imaging of atherosclerotic vessel wall disease.
Nörenberg, Dominik; Ebersberger, Hans U; Diederichs, Gerd; Hamm, Bernd; Botnar, René M; Makowski, Marcus R
2016-03-01
Molecular imaging aims to improve the identification and characterization of pathological processes in vivo by visualizing the underlying biological mechanisms. Molecular imaging techniques are increasingly used to assess vascular inflammation, remodeling, cell migration, angioneogenesis and apoptosis. In cardiovascular diseases, molecular magnetic resonance imaging (MRI) offers new insights into the in vivo biology of pathological vessel wall processes of the coronary and carotid arteries and the aorta. This includes detection of early vascular changes preceding plaque development, visualization of unstable plaques and assessment of response to therapy. The current review focuses on recent developments in the field of molecular MRI to characterise different stages of atherosclerotic vessel wall disease. A variety of molecular MR-probes have been developed to improve the non-invasive detection and characterization of atherosclerotic plaques. Specifically targeted molecular probes allow for the visualization of key biological steps in the cascade leading to the development of arterial vessel wall lesions. Early detection of processes which lead to the development of atherosclerosis and the identification of vulnerable atherosclerotic plaques may enable the early assessment of response to therapy, improve therapy planning, foster the prevention of cardiovascular events and may open the door for the development of patient-specific treatment strategies. Targeted MR-probes allow the characterization of atherosclerosis on a molecular level. Molecular MRI can identify in vivo markers for the differentiation of stable and unstable plaques. Visualization of early molecular changes has the potential to improve patient-individualized risk-assessment.
Electrical localization of weakly electric fish using neural networks
NASA Astrophysics Data System (ADS)
Kiar, Greg; Mamatjan, Yasin; Jun, James; Maler, Len; Adler, Andy
2013-04-01
Weakly Electric Fish (WEF) emit an Electric Organ Discharge (EOD), which travels through the surrounding water and enables WEF to locate nearby objects or to communicate between individuals. Previous tracking of WEF has been conducted using infrared (IR) cameras and subsequent image processing. The limitation of visual tracking is its relatively low frame-rate and lack of reliability when visually obstructed. Thus, there is a need for reliable monitoring of WEF location and behaviour. The objective of this study is to provide an alternative and non-invasive means of tracking WEF in real-time using neural networks (NN). This study was carried out in three stages. The first stage was to recreate voltage distributions by simulating the WEF using EIDORS and finite element method (FEM) modelling. The second stage was to validate the model using phantom data acquired from an Electrical Impedance Tomography (EIT) based system, including a phantom fish and tank. In the third stage, the measurement data were acquired using a restrained WEF within a tank. We trained the NN based on the voltage distributions for different locations of the WEF. With networks trained on the acquired data, we tracked new locations of the WEF and observed the movement patterns. The results showed a strong correlation between expected and calculated values of WEF position in one dimension, yielding a high spatial resolution (within 1 cm) and a temporal resolution 10 times higher than that of IR cameras. Thus, the developed approach could be used as a practical method to non-invasively monitor the WEF in real-time.
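An illustrative sketch of the localization idea, with a toy forward model standing in for the EIDORS/FEM simulation and a small feed-forward regressor mapping voltage vectors to position; the layer sizes and forward model are assumptions, not the study's configuration:

```python
# Train a neural network to map simulated electrode-voltage vectors to a 1-D
# fish position, then evaluate it on held-out simulated measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_forward_model(position, n_electrodes=16, noise=0.01):
    """Fake voltage pattern for a source at `position` along the tank (0..1)."""
    electrodes = np.linspace(0, 1, n_electrodes)
    signal = 1.0 / (0.05 + np.abs(electrodes - position))   # decays with distance
    return signal + noise * rng.standard_normal(n_electrodes)

positions = rng.random(2000)
voltages = np.array([toy_forward_model(p) for p in positions])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(voltages[:1500], positions[:1500])

pred = net.predict(voltages[1500:])
print("mean abs. error:", np.mean(np.abs(pred - positions[1500:])))
```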
The changing landscape of functional brain networks for face processing in typical development.
Joseph, Jane E; Swearingen, Joshua E; Clark, Jonathan D; Benca, Chelsie E; Collins, Heather R; Corbly, Christine R; Gathers, Ann D; Bhatt, Ramesh S
2012-11-15
Greater expertise for faces in adults than in children may be achieved by a dynamic interplay of functional segregation and integration of brain regions throughout development. The present study examined developmental changes in face network functional connectivity in children (5-12 years) and adults (18-43 years) during face-viewing using a graph-theory approach. A face-specific developmental change involved connectivity of the right occipital face area (ROFA). During childhood, this node increased in strength and within-module clustering based on positive connectivity. These changes reflect an important role of the ROFA in segregation of function during childhood. In addition, strength and diversity of connections within a module that included primary visual areas (left and right calcarine) and limbic regions (left hippocampus and right inferior orbitofrontal cortex) increased from childhood to adulthood, reflecting increased visuo-limbic integration. This integration was pronounced for faces but also emerged for natural objects. Taken together, the primary face-specific developmental changes involved segregation of a posterior visual module during childhood, possibly implicated in early stage perceptual face processing, and greater integration of visuo-limbic connections from childhood to adulthood, which may reflect processing related to development of perceptual expertise for individuation of faces and other visually homogenous categories. Copyright © 2012 Elsevier Inc. All rights reserved.
Smith, Philip L; Sewell, David K
2013-07-01
We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
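A minimal sketch, not the authors' full model, of the two ingredients named above: a generic shunting competition among detectors followed by a simple diffusion decision on the selected evidence. Parameter values and the coupling between the stages are illustrative only:

```python
# Stage 1: shunting competition selects items into VSTM.
# Stage 2: a drift-diffusion process decides about the contents of VSTM.
import numpy as np

rng = np.random.default_rng(0)

def shunting_competition(inputs, steps=500, dt=0.01, A=1.0, B=1.0):
    """dx_i/dt = -A*x_i + (B - x_i)*E_i - x_i * sum_{j != i} E_j (generic form)."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        inhibition = inputs.sum() - inputs
        x += dt * (-A * x + (B - x) * inputs - x * inhibition)
    return x

def diffusion_decision(drift, threshold=1.0, dt=0.001, noise=1.0, max_t=5.0):
    """Accumulate noisy evidence until one of two bounds is crossed."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_t:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("target" if evidence > 0 else "nontarget"), t

# Three display locations; the second carries the strongest task-relevant feature.
activation = shunting_competition(np.array([0.2, 0.9, 0.3]))
drift = activation[1] - activation.mean()   # evidence favouring the selected item
print(activation, diffusion_decision(drift))
```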
The Lingering Effects of an Artificial Blind Spot
Morgan, Michael J.; McEwan, William; Solomon, Joshua
2007-01-01
Background When steady fixation is maintained on the centre of a large patch of texture, holes in the periphery of the texture rapidly fade from awareness, producing artificial scotomata (i.e., invisible areas of reduced vision, like the natural ‘blind spot’). There has been considerable controversy about whether this apparent ‘filling in’ depends on a low-level or high-level visual process. Evidence for an active process is that when the texture around the scotomata is suddenly removed, phantasms of the texture appear within the previous scotomata. Methodology To see if these phantasms were equivalent to real low-level signals, we measured contrast discrimination for real dynamic texture patches presented on top of the phantasms. Principal Findings Phantasm intensity varied with adapting contrast. Contrast discrimination depended on both (real) pedestal contrast and phantasm intensity, in a manner indicative of a common sensory threshold. The phantasms showed inter-ocular transfer, proving that their effects are cortical rather than retinal. Conclusions We show that this effect is consistent with a tonic spreading of the adapting texture into the scotomata, coupled with some overall loss of sensitivity. Our results support the view that ‘filling in’ happens at an early stage of visual processing, quite possibly in primary visual cortex (V1). PMID:17327917
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Wang, Qing; Lei, Kai; Wang, Yang; Liu, Bo; Song, Zhitang
2016-10-01
The femtosecond laser pulse-induced phase transition dynamics of Cr-doped Sb2Te1 films were studied by real-time reflectivity measurements with a pump-probe system. It was found that crystallization of the as-deposited CrxSb2Te1 phase-change thin films exhibits a multi-stage process lasting about 40 ns. The time required for the multi-stage process does not appear to be related to the Cr content. The durations of the crystallization and amorphization processes are approximately the same. Doping Cr into the Sb2Te1 thin film can improve its photo-thermal stability without an obvious change in the crystallization rate. Optical images and image intensity cross sections are used to visualize the transformed regions. This work may provide further insight into the phase-change mechanism of CrxSb2Te1 under extra-non-equilibrium conditions and aid the development of new ultrafast phase-change memory materials.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Identification and Cloning of the Wnt-1 Receptor
1996-10-01
Examination of embryos with duplicated axes revealed that Xwnt-5A and hFz5 induced a full array of dorsal tissues, including a neural tube, a notochord and somites in both axes. Xwnt-5A plus hFz5 also induced ectopic goosecoid (gsc) expression in stage 11 embryos.
Long-term academic stress enhances early processing of facial expressions.
Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui
2016-11-01
Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotional facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization to processing negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information inputs and a subsequent attentional bias toward processing emotionally negative stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Klein, Fabian; Iffland, Benjamin; Schindler, Sebastian; Wabnitz, Pascal; Neuner, Frank
2015-12-01
Recent studies have shown that the perceptual processing of human faces is affected by context information, such as previous experiences and information about the person represented by the face. The present study investigated the impact of verbally presented information about the person that varied with respect to affect (neutral, physically threatening, socially threatening) and reference (self-referred, other-referred) on the processing of faces with an inherently neutral expression. Stimuli were presented in a randomized presentation paradigm. Event-related potential (ERP) analysis demonstrated a modulation of the evoked potentials by reference at the EPN (early posterior negativity) and LPP (late positive potential) stages and an enhancing effect of affective valence on the LPP (700-1000 ms), with socially threatening context information leading to the most pronounced LPP amplitudes. We also found an interaction between reference and valence, with self-related neutral context information leading to a more pronounced LPP than other-related neutral context information. Our results indicate an impact of self-reference on early, presumably automatic processing stages and also a strong impact of valence on later stages. Using a randomized presentation paradigm, this study confirms that context information affects the visual processing of faces, ruling out possible confounding factors such as facial configuration or conditional learning effects.
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
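The cross-task decoding logic can be sketched as follows, with simulated activation patterns in place of fMRI data; the classifier choice and voxel counts are placeholders, not the study's pipeline:

```python
# Train a linear classifier on high vs. low VISUAL WM load, test it on VERBAL
# WM load patterns, and vice versa. Above-chance transfer indicates a shared
# load-related pattern across the two tasks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
shared_load_axis = rng.standard_normal(n_voxels)         # common load-related pattern

def simulate_task(task_noise):
    labels = np.repeat([0, 1], n_trials // 2)             # 0 = low load, 1 = high load
    patterns = rng.standard_normal((n_trials, n_voxels)) * task_noise
    patterns += np.outer(labels, shared_load_axis)        # load signal shared across tasks
    return patterns, labels

visual_X, visual_y = simulate_task(task_noise=2.0)
verbal_X, verbal_y = simulate_task(task_noise=2.0)

clf = LogisticRegression(max_iter=1000)
clf.fit(visual_X, visual_y)                               # train on visual WM load
print("visual -> verbal accuracy:", clf.score(verbal_X, verbal_y))

clf.fit(verbal_X, verbal_y)                               # and the reverse direction
print("verbal -> visual accuracy:", clf.score(visual_X, visual_y))
```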
Females scan more than males: a potential mechanism for sex differences in recognition memory.
Heisz, Jennifer J; Pottruff, Molly M; Shore, David I
2013-07-01
Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment.
Combined contributions of feedforward and feedback inputs to bottom-up attention
Khorsand, Peyman; Moore, Tirin; Soltani, Alireza
2015-01-01
In order to deal with a large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of the visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to rely solely on feedforward processing of the external inputs, as implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages, as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for a better understanding of bottom-up attention. PMID:25784883
Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel
2011-09-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event-related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE); and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.
Yang, Zhou; Jackson, Todd; Gao, Xiao; Chen, Hong
2012-08-01
This research examined selective biases in visual attention related to fear of pain by tracking eye movements (EM) toward pain-related stimuli among the pain-fearful. EM of 21 young adults scoring high on a fear of pain measure (H-FOP) and 20 lower-scoring (L-FOP) control participants were measured during a dot-probe task that featured sensory pain-neutral, health catastrophe-neutral and neutral-neutral word pairs. Analyses indicated that the H-FOP group was more likely to direct immediate visual attention toward sensory pain and health catastrophe words than was the L-FOP group. The H-FOP group also had comparatively shorter first fixation latencies toward sensory pain and health catastrophe words. Conversely, groups did not differ on EM indices of attentional maintenance (i.e., first fixation duration, gaze duration, and average fixation duration) or reaction times to dot probes. Finally, both groups showed a cycle of disengagement followed by re-engagement toward sensory pain words relative to other word types. In sum, this research is the first to reveal biases toward pain stimuli during very early stages of visual information processing among the highly pain-fearful and highlights the utility of EM tracking as a means to evaluate visual attention as a dynamic process in the context of FOP. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
A Human Factors Framework for Payload Display Design
NASA Technical Reports Server (NTRS)
Dunn, Mariea C.; Hutchinson, Sonya L.
1998-01-01
During missions to space, one charge of the astronaut crew is to conduct research experiments. These experiments, referred to as payloads, typically are controlled by computers. Crewmembers interact with payload computers by using visual interfaces or displays. To enhance the safety, productivity, and efficiency of crewmember interaction with payload displays, particular attention must be paid to the usability of these displays. Enhancing display usability requires adoption of a design process that incorporates human factors engineering principles at each stage. This paper presents a proposed framework for incorporating human factors engineering principles into the payload display design process.
Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.
2017-01-01
Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in an irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early- and late-sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide commensurable scenarios for estimating the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346
Translating the Verbal to the Visual
ERIC Educational Resources Information Center
Engbers, Susanna Kelly
2012-01-01
Communication has always been at least partly a visual experience--insofar as the speaker's appearance on a stage or the text's appearance on the page. Certainly, however, the experience is becoming more and more visual. Thus, equipping students with the tools necessary to analyze and evaluate the visual rhetoric that surrounds everyone is a task…
ERIC Educational Resources Information Center
Cole, Charles; Mandelblatt, Bertie; Stevenson, John
2002-01-01
Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…
García-Rodríguez, Beatriz; Guillén, Carmen Casares; Barba, Rosa Jurado; Rubio Valladolid, Gabriel; Arjona, José Antonio Molina; Ellgring, Heiner
2012-02-15
There is evidence that visuo-spatial capacity can become overloaded when processing a secondary visual task (Dual Task, DT), as occurs in daily life. Hence, we investigated the influence of visuo-spatial interference on the identification of emotional facial expressions (EFEs) in early stages of Parkinson's disease (PD). We compared the identification of 24 emotional faces illustrating six basic emotions in unmedicated, recently diagnosed PD patients (16) and healthy adults (20) under two different conditions: a) simple EFE identification, and b) identification with a concurrent visuo-spatial task (Corsi Blocks). EFE identification by PD patients was significantly worse than that of healthy adults when combined with another visual stimulus. Published by Elsevier B.V.
Early and late beta-band power reflect audiovisual perception in the McGurk illusion.
Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian
2015-04-01
The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.
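A generic sketch of one way to quantify beta-band (13-30 Hz) power in an EEG epoch, via band-pass filtering plus a Hilbert envelope; this illustrates the measure discussed above but is not the authors' exact time-frequency analysis, and the sampling rate and windows are assumptions:

```python
# Band-pass filter a single-channel epoch to the beta band and compute the
# mean instantaneous power in an early (0-500 ms) and a late (500-800 ms) window.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.8, 1.0 / fs)           # epoch around stimulus onset
eeg = np.random.randn(t.size)                # stand-in single-channel epoch

b, a = butter(4, [13.0 / (fs / 2), 30.0 / (fs / 2)], btype="bandpass")
beta = filtfilt(b, a, eeg)                   # zero-phase band-pass filter
power = np.abs(hilbert(beta)) ** 2           # instantaneous beta power

early = power[(t >= 0.0) & (t < 0.5)].mean()
late = power[(t >= 0.5) & (t < 0.8)].mean()
print("early beta power:", early, "late beta power:", late)
```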
NASA Astrophysics Data System (ADS)
Beyeler, Michael; Rokem, Ariel; Boynton, Geoffrey M.; Fine, Ione
2017-10-01
The ‘bionic eye’—so long a dream of the future—is finally becoming a reality with retinal prostheses available to patients in both the US and Europe. However, clinical experience with these implants has made it apparent that the visual information provided by these devices differs substantially from normal sight. Consequently, the ability of patients to learn to make use of this abnormal retinal input plays a critical role in whether or not some functional vision is successfully regained. The goal of the present review is to summarize the vast basic science literature on developmental and adult cortical plasticity with an emphasis on how this literature might relate to the field of prosthetic vision. We begin with describing the distortion and information loss likely to be experienced by visual prosthesis users. We then define cortical plasticity and perceptual learning, and describe what is known, and what is unknown, about visual plasticity across the hierarchy of brain regions involved in visual processing, and across different stages of life. We close by discussing what is known about brain plasticity in sight restoration patients and discuss biological mechanisms that might eventually be harnessed to improve visual learning in these patients.
Heuer, Anna; Schubö, Anna
2016-01-01
Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, performance in a visual working memory task as well as the CDA/SPCN and the N2pc, ERP components associated with visual working memory and attentional processes, were analysed. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. Results show that attentional selection serves to robustly protect relevant representations in the focus of attention while unselected representations which may become relevant again still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and showed stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.
Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees
2014-01-01
Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586
Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-
Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2016-01-01
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage. PMID:28030631
Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.
Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro
2017-08-01
Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action. In other words, cognitive processing after observation is not related to the success of imitation. Twenty university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color-coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.
Mädebach, Andreas; Markuske, Anna-Maria; Jescheniak, Jörg D
2018-05-22
Picture naming takes longer in the presence of socially inappropriate (taboo) distractor words compared with neutral distractor words. Previous studies have attributed this taboo interference effect to increased attentional capture by taboo words or to verbal self-monitoring, that is, control processes scrutinizing verbal responses before articulation. In this study, we investigated the cause and locus of the taboo interference effect by contrasting three tasks that used the same target pictures but systematically differed with respect to the processing stages involved: picture naming (requiring conceptual processing, lexical processing, and articulation), phoneme decision (requiring conceptual and lexical processing), and natural size decision (requiring conceptual processing only). We observed taboo interference in picture naming and phoneme decision. In size decision, taboo interference was not reliably observed under the same task conditions in which the effect arose in picture naming and phoneme decision, but it emerged when the difficulty of the size decision task was increased by visually degrading the target pictures. Overall, these results suggest that taboo interference cannot be exclusively attributed to verbal self-monitoring operating over articulatory responses. Instead, taboo interference appears to arise prior to articulatory preparation, during lexical processing and, at least with sufficiently high task difficulty, during prelexical processing stages.
Martens, Ulla; Hübner, Ronald
2013-03-01
While hemispheric differences in global/local processing have been reported by various studies, it is still under dispute at which processing stage they occur. Primarily, it was assumed that these asymmetries originate from an early perceptual stage. Instead, the content-level binding theory (Hübner & Volberg, 2005) suggests that the hemispheres differ at a later stage at which the stimulus information is bound to its respective level. The present study tested this assumption by means of steady-state evoked potentials (SSVEPs). In particular, we presented hierarchical letters flickering at 12 Hz while participants categorised the letters at a pre-cued level (global or local). The information at the two levels could be congruent or incongruent with respect to the required response. Since content-binding is only necessary if there is a response conflict, asymmetric hemispheric processing should be observed only for incongruent stimuli. Indeed, our results show that the cue and congruent stimuli elicited equal SSVEP global/local effects in both hemispheres. In contrast, incongruent stimuli elicited lower SSVEP amplitudes for a local than for a global target level at left posterior electrodes, whereas a reversed pattern was seen at right hemispheric electrodes. These findings provide further evidence for a level-specific hemispheric advantage with respect to content-level binding. Moreover, the fact that the SSVEP is sensitive to these processes offers the possibility to separately track global and local processing by presenting both level contents with different frequencies. Copyright © 2012 Elsevier Inc. All rights reserved.
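As an aside on the frequency-tagging logic mentioned in that abstract, the sketch below (plain Python, not taken from the study) shows how the amplitude at a tagging frequency can be read off the spectrum of a single recorded epoch. The sampling rate, epoch length, the 12 Hz and 15 Hz tags, and all variable names are assumptions made for illustration only.

import numpy as np

def ssvep_amplitude(epoch, fs, tag_freq):
    """Spectral amplitude of `epoch` (1-D array) at `tag_freq` in Hz."""
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Synthetic 2-s epoch with two embedded tags (e.g., one level content at 12 Hz,
# the other at 15 Hz) plus noise; all values are illustrative only.
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
epoch = (0.8 * np.sin(2 * np.pi * 12 * t)
         + 0.5 * np.sin(2 * np.pi * 15 * t)
         + np.random.randn(t.size))
print(ssvep_amplitude(epoch, fs, 12.0), ssvep_amplitude(epoch, fs, 15.0))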
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions were linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with models proposing that object naming relies on a left-lateralised language-dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both increased demands specific to the task and more general difficulties in language processing.
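To make the analysis style concrete, here is a minimal Python sketch of a principal component analysis over four task scores; with the real behavioural data, a component loading on all four tasks would correspond to the 'shared' factor, and one isolating object naming to the 'unique' factor. The synthetic scores and variable names are assumptions for illustration, not the authors' data or code.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: object naming, sentence production, sentence reading, nonword reading
# (synthetic stand-ins for the patients' scores).
scores = rng.normal(size=(280, 4))

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(scores))

print("explained variance ratios:", pca.explained_variance_ratio_)
print("loadings (rows = tasks, columns = components):")
print(pca.components_.T)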
Seeing double: visual physiology of double-retina eye ontogeny in stomatopod crustaceans.
Feller, Kathryn D; Cohen, Jonathan H; Cronin, Thomas W
2015-03-01
Stomatopod eye development is unusual among crustaceans. Just prior to metamorphosis, an adult retina and associated neuro-processing structures emerge adjacent to the existing material in the larval compound eye. Depending on the species, the duration of this double-retina eye can range from a few hours to several days. Although this developmental process occurs in all stomatopod species observed to date, the retinal physiology and the extent to which each retina contributes to the animal's visual sensitivity during this transition phase are unknown. We investigated the visual physiology of stomatopod double retinas using microspectrophotometry and electroretinogram (ERG) recordings from different developmental stages of the Western Atlantic species Squilla empusa. Though microspectrophotometry data were inconclusive, we found robust ERG responses in both larval and adult retinas at all sampled time points, indicating that the adult retina responds to light from the very onset of its emergence. We also found evidence of an increase in response dynamics with ontogeny, as well as an increase in the sensitivity of retinal tissue during the double-retina phase relative to single retinas. These data provide an initial investigation into the ontogeny of vision during stomatopod double-retina eye development.
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one is based on a single-stage network over hand-crafted features and the other on a multistage network that can learn features from raw pixels automatically. Finally, these multiple midlevel features are combined by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the results demonstrate the effectiveness of the proposed method.
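The general pipeline described in that abstract can be caricatured in a few lines. The sketch below is not the authors' implementation: it substitutes a total-variation split for their hierarchical decomposition, HOG descriptors for their two feature-extraction schemes, and a fixed-weight average of per-view RBF kernels for the multiple kernel learning step; the sample image, crops, labels, and parameters are all assumptions.

import numpy as np
from skimage import data, img_as_float
from skimage.feature import hog
from skimage.restoration import denoise_tv_chambolle
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def decompose(image):
    """Total-variation smoothing as a simple structure/texture split."""
    structure = denoise_tv_chambolle(image, weight=0.1)
    texture = image - structure
    return structure, texture

def features(layer):
    """HOG descriptor of one layer (one of many possible encoders)."""
    return hog(layer, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Toy 'data set': four crops of one sample image with dummy labels.
image = img_as_float(data.camera())
crops = [image[i:i + 128, j:j + 128] for i in (0, 128) for j in (0, 128)]
labels = np.array([0, 0, 1, 1])

struct_feats, tex_feats = [], []
for crop in crops:
    s, t = decompose(crop)
    struct_feats.append(features(s))
    tex_feats.append(features(t))
struct_feats, tex_feats = np.array(struct_feats), np.array(tex_feats)

# Fuse the two views: a fixed-weight average of per-view RBF kernels,
# then a support vector machine on the precomputed kernel.
K = 0.5 * rbf_kernel(struct_feats) + 0.5 * rbf_kernel(tex_feats)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))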
Synaptic noise is an information bottleneck in the inner retina during dynamic visual stimulation
Freed, Michael A; Liang, Zhiyin
2014-01-01
In daylight, noise generated by cones determines the fidelity with which visual signals are initially encoded. Subsequent stages of visual processing require synapses from bipolar cells to ganglion cells, but whether these synapses generate a significant amount of noise was unknown. To characterize noise generated by these synapses, we recorded excitatory postsynaptic currents from mammalian retinal ganglion cells and subjected them to a computational noise analysis. The release of transmitter quanta at bipolar cell synapses contributed substantially to the noise variance found in the ganglion cell, causing a significant loss of fidelity from the bipolar cell array to the postsynaptic ganglion cell. Virtually all the remaining noise variance originated in the presynaptic circuit. Circuit noise had a frequency content similar to that of noise shared by ganglion cells but very different from that of noise from bipolar cell synapses, indicating that these synapses constitute a source of independent noise not shared by ganglion cells. These findings contribute to a picture of daylight retinal circuits in which noise from cones and noise generated by synaptic transmission of cone signals significantly limit visual fidelity. PMID:24297850
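For readers unfamiliar with this kind of noise analysis, the sketch below compares the variance and frequency content of two synthetic noise traces with Welch power spectra, in the spirit of separating quantal (synaptic) from circuit noise. The traces, sampling rate, and filter settings are invented for illustration and are not the recorded currents or the authors' analysis code.

import numpy as np
from scipy.signal import butter, lfilter, welch

fs = 10_000.0      # sampling rate in Hz (assumed)
n = 100_000
rng = np.random.default_rng(1)

# Synthetic 'quantal' noise: broadband; synthetic 'circuit' noise: low-pass filtered.
quantal = rng.normal(scale=1.0, size=n)
b, a = butter(2, 50.0 / (fs / 2.0), btype="low")
circuit = lfilter(b, a, rng.normal(scale=5.0, size=n))

for name, trace in [("quantal", quantal), ("circuit", circuit)]:
    f, psd = welch(trace, fs=fs, nperseg=4096)
    frac_low = psd[f <= 100.0].sum() / psd.sum()
    print(f"{name}: variance = {trace.var():.4f}, "
          f"fraction of power below 100 Hz = {frac_low:.2f}")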
Do we understand high-level vision?
Cox, David Daniel
2014-04-01
'High-level' vision lacks a single, agreed upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, lighting and the presence of other nearby objects. However, while object recognition as an operational framing of high-level vision is intuitively appealing, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work, and ask whether we should reframe our approaches to understanding high-level vision. Copyright © 2014. Published by Elsevier Ltd.
The Precedence of Global Features in the Perception of Map Symbols
1988-06-01
be continually updated. The present study evaluated the feasibility of a serial model of visual processing. By comparing performance between a symbol...symbols, is based on a "filtering" procedure, consisting of a series of passive-to-active or global-to-local stages. Navon (1977, 1981a) has proposed a...packages or segments. This advances the earlier, static feature aggregation approaches to comprise a "figure." According to the global precedence model
Electron Microscopy Imaging of Zinc Soaps Nucleation in Oil Paint.
Hermans, Joen; Osmond, Gillian; van Loon, Annelies; Iedema, Piet; Chapman, Robyn; Drennan, John; Jack, Kevin; Rasch, Ronald; Morgan, Garry; Zhang, Zhi; Monteiro, Michael; Keune, Katrien
2018-06-04
Using the recently developed techniques of electron tomography, we have explored the first stages of the disfiguring formation of zinc soaps in modern oil paintings. The formation of complexes of zinc ions with fatty acids in paint layers is a major threat to the stability and appearance of many late 19th and early 20th century oil paintings. Moreover, the occurrence of zinc soaps in oil paintings leading to defects is disturbingly common, but the chemical reactions and migration mechanisms leading to large zinc soap aggregates or zones remain poorly understood. State-of-the-art scanning (SEM) and transmission (TEM) electron microscopy techniques, primarily developed for biological specimens, have enabled us to visualize the earliest stages of crystalline zinc soap growth in a reconstructed zinc white (ZnO) oil paint sample. In situ sectioning techniques and sequential imaging within the SEM allowed three-dimensional tomographic reconstruction of sample morphology. Improvements in the detection and discrimination of backscattered electrons enabled us to identify local precipitation processes with small atomic number contrast. The SEM images were correlated to low-dose and high-sensitivity TEM images, with high-resolution tomography providing unprecedented insight into the structure of nucleating zinc soaps at the molecular level. The correlative approach applied here to study phase separation and crystallization processes specific to a problem in art conservation creates possibilities for visualizing phase formation in a wide range of soft materials.
Ontology-based malaria parasite stage and species identification from peripheral blood smear images.
Makkapati, Vishnu V; Rao, Raghuveer M
2011-01-01
The diagnosis and treatment of malaria infection requires detecting the presence of the malaria parasite in the patient as well as identification of the parasite species. We present an image processing-based approach to detect parasites in microscope images of a blood smear and an ontology-based classification of the stage of the parasite for identifying the species of infection. This approach is patterned after the diagnostic approach adopted by a pathologist for visual examination and hence is expected to deliver similar results. We formulate several rules based on the morphology of the basic components of a parasite, namely, chromatin dot(s) and cytoplasm, to identify the parasite stage and species. Numerical results are presented for data taken from various patients. A sensitivity of 88% and a specificity of 95% are reported from evaluation of the scheme on 55 images.
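To illustrate the flavour of such morphology-based rules, the Python sketch below counts chromatin dots and measures cytoplasm area in already-segmented binary masks and then applies toy stage rules. The masks, thresholds, and rules here are hypothetical and are not those of the paper.

import numpy as np
from skimage.measure import label

def parasite_features(chromatin_mask, cytoplasm_mask):
    """Return (number of chromatin dots, cytoplasm area in pixels)."""
    n_dots = int(label(chromatin_mask).max())
    area = int(np.count_nonzero(cytoplasm_mask))
    return n_dots, area

def classify_stage(n_dots, cytoplasm_area):
    """Toy rules only: real criteria depend on calibrated morphology."""
    if n_dots >= 2 and cytoplasm_area < 200:
        return "ring stage (double chromatin)"
    if n_dots == 1 and cytoplasm_area < 200:
        return "ring stage"
    if n_dots == 1:
        return "trophozoite"
    return "schizont / other"

# Tiny synthetic masks standing in for segmented smear regions.
chromatin = np.zeros((50, 50), dtype=bool)
chromatin[10:13, 10:13] = True
chromatin[30:33, 30:33] = True
cytoplasm = np.zeros((50, 50), dtype=bool)
cytoplasm[8:36, 8:36] = True

print(classify_stage(*parasite_features(chromatin, cytoplasm)))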
The stage of priming: are intertrial repetition effects attentional or decisional?
Becker, Stefanie I
2008-02-01
In a visual search task, reaction times to a target are shorter when its features are repeated than when they switch. The present study investigated whether these priming effects affect the attentional stage of target selection, as proposed by the priming of pop-out account, or whether they modulate performance at a later, post-selectional stage, as claimed by the episodic retrieval view. Second, to test whether priming affects only the target-defining feature or whether it can apply to all target features in a holistic fashion, two presentation conditions were used that promoted either encoding of only the target-defining feature or holistic encoding of all target features. Results from four eye-tracking experiments involving size and colour singleton targets showed, first, that priming modulates selectional processes concerned with guiding attention. Second, there were traces of holistic priming effects, which, however, were modulated not by the displays but by expectation and task difficulty.
NASA Astrophysics Data System (ADS)
Ishizu, Tomohiro; Sakamoto, Yasuhiro
2017-07-01
In this extensive and valuable theoretical article, Pelowski et al. propose a psychological architecture of art appreciation by introducing the concepts of early/bottom-up and relatively late/top-down stages. The former is characterized as automatic processing of the perceptual features of visual images, while the latter comprises cognitive and evaluative processes in which modulations from acquired knowledge and memories come into play through recurrent loops to form the final experience; the authors also propose brain areas/networks that possibly play a role in each processing component [9].
Fuggetta, Giorgio; Duke, Philip A
2017-05-01
The operation of attention on visible objects involves a sequence of cognitive processes. The current study first aimed to elucidate the effects of practice on the neural mechanisms underlying attentional processes, as measured with both behavioural and electrophysiological measures. Second, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components that play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components with different rostro-caudal topographic representations. The first component included the centro-parietal and parieto-occipital mismatch-triggered negativity, involved in integrating visual representations of the target with current task-relevant representations stored in visual working memory, which loaded with the second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of the bilateral anterior P2, related to detection of a specific pop-out feature, which loaded with the bilateral anterior N2, related to detection of conflicting features, and the fronto-central mismatch-triggered negativity. The third component included the parieto-occipital N1, related to early neural responses to the stimulus array, which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discussed these three components as representing different neurocognitive systems, modulated by practice, within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.