Sample records for visual stimulus inversion

  1. Inverse Target- and Cue-Priming Effects of Masked Stimuli

    ERIC Educational Resources Information Center

    Mattler, Uwe

    2007-01-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses…

  2. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  3. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  4. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  5. Privileged Detection of Conspecifics: Evidence from Inversion Effects during Continuous Flash Suppression

    ERIC Educational Resources Information Center

    Stein, Timo; Sterzer, Philipp; Peelen, Marius V.

    2012-01-01

    The rapid visual detection of other people in our environment is an important first step in social cognition. Here we provide evidence for selective sensitivity of the human visual system to upright depictions of conspecifics. In a series of seven experiments, we assessed the impact of stimulus inversion on the detection of person silhouettes,…

  6. Environmental Inversion Effects in Face Perception

    ERIC Educational Resources Information Center

    Davidenko, Nicolas; Flusberg, Stephen J.

    2012-01-01

    Visual processing is highly sensitive to stimulus orientation; for example, face perception is drastically worse when faces are oriented inverted vs. upright. However, stimulus orientation must be established in relation to a particular reference frame, and in most studies, several reference frames are conflated. Which reference frame(s) matter in…

  7. Inverse target- and cue-priming effects of masked stimuli.

    PubMed

    Mattler, Uwe

    2007-02-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses are facilitated after dissimilar primes. Previous studies on inverse priming effects examined target-priming effects, which arise when the prime and the target stimuli share features that are critical for the response decision. In contrast, 3 experiments of the present study demonstrate inverse priming effects in a nonmotor cue-priming paradigm. Inverse cue-priming effects exhibited time courses comparable to inverse target-priming effects. Results suggest that inverse priming effects do not arise from specific processes of the response system but follow from operations that are more general.

  8. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    ERIC Educational Resources Information Center

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  9. Cortical dynamics of feature binding and reset: control of visual persistence.

    PubMed

    Francis, G; Grossberg, S; Mingolla, E

    1994-04-01

    An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism and new predictions are made.

  10. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  11. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods

    PubMed Central

    Cocco, Simona; Leibler, Stanislas; Monasson, Rémi

    2009-01-01

    Complexity of neural systems often makes impracticable explicit measurements of all interactions between their constituents. Inverse statistical physics approaches, which infer effective couplings between neurons from their spiking activity, have been so far hindered by their computational complexity. Here, we present 2 complementary, computationally efficient inverse algorithms based on the Ising and “leaky integrate-and-fire” models. We apply those algorithms to reanalyze multielectrode recordings in the salamander retina in darkness and under random visual stimulus. We find strong positive couplings between nearby ganglion cells common to both stimuli, whereas long-range couplings appear under random stimulus only. The uncertainty on the inferred couplings due to limitations in the recordings (duration, small area covered on the retina) is discussed. Our methods will allow real-time evaluation of couplings for large assemblies of neurons. PMID:19666487
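
    As a rough illustration of the "inverse" approach described above, the following Python sketch infers pairwise couplings from binarized spike trains with the naive mean-field approximation (couplings read off the inverse covariance matrix). It is a generic textbook approximation applied to made-up data, not the authors' Ising or integrate-and-fire algorithms.

      # Naive mean-field "inverse Ising" sketch on simulated spike trains.
      # Toy data and a generic approximation; NOT the algorithms used in the paper.
      import numpy as np

      rng = np.random.default_rng(0)
      n_bins, n_cells = 20000, 8
      spikes = (rng.random((n_bins, n_cells)) < 0.05).astype(float)  # 1 = spike in bin

      means = spikes.mean(axis=0)              # firing probabilities m_i
      C = np.cov(spikes, rowvar=False)         # pairwise covariance matrix C_ij

      J = -np.linalg.inv(C)                    # naive mean-field couplings J ~ -(C^-1)
      np.fill_diagonal(J, 0.0)

      s_mean = 2 * means - 1                   # map {0,1} rates to {-1,+1} magnetizations
      h = np.arctanh(s_mean) - J @ s_mean      # mean-field estimate of external fields

      print("inferred couplings:", J.shape, "inferred fields:", h.shape)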

  12. The perception of (naked only) bodies and faceless heads relies on holistic processing: Evidence from the inversion effect.

    PubMed

    Bonemei, Rob; Costantino, Andrea I; Battistel, Ilenia; Rivolta, Davide

    2018-05-01

    Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., the stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that can trigger holistic processing in faces and bodies, however, still remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless bodies naked, whole bodies naked, headless bodies clothed, and whole bodies clothed. Both accuracy and inversion efficiency scores show inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect for faces, naked bodies, and faceless heads was similar. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), on the other hand, may trigger clothes-sensitive rather than body-sensitive perceptual mechanisms. © 2017 The British Psychological Society.

  13. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
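
    A minimal Python sketch of the kind of model described here: word prototypes are points in a d-dimensional feature space, auditory (and optionally visual) observations are noisy samples of the true word, and recognition is maximum-posterior inference. The dimensionality, noise levels, and word count are illustrative assumptions, not the parameters of the paper's model.

      # Toy Bayesian audio-visual word recognition in a feature space (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)

      def recognition_accuracy(d, sigma_a, sigma_v=None, n_words=50, n_trials=2000):
          words = rng.normal(size=(n_words, d))          # word prototypes in feature space
          correct = 0
          for _ in range(n_trials):
              true_idx = rng.integers(n_words)
              x_a = words[true_idx] + sigma_a * rng.normal(size=d)        # auditory sample
              loglik = -np.sum((words - x_a) ** 2, axis=1) / (2 * sigma_a ** 2)
              if sigma_v is not None:                    # add an independent visual cue
                  x_v = words[true_idx] + sigma_v * rng.normal(size=d)
                  loglik += -np.sum((words - x_v) ** 2, axis=1) / (2 * sigma_v ** 2)
              correct += int(np.argmax(loglik) == true_idx)
          return correct / n_trials

      for sigma_a in (0.5, 1.5, 3.0):                    # low, intermediate, high auditory noise
          a_only = recognition_accuracy(d=20, sigma_a=sigma_a)
          av = recognition_accuracy(d=20, sigma_a=sigma_a, sigma_v=2.0)
          print(f"sigma_a={sigma_a}: A-only={a_only:.2f}, AV={av:.2f}, gain={av - a_only:+.2f}")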

  14. Trade-off between curvature tuning and position invariance in visual area V4

    PubMed Central

    Sharpee, Tatyana O.; Kouh, Minjoon; Reynolds, John H.

    2013-01-01

    Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values. PMID:23798444

  15. Understanding the mechanisms behind the sexualized-body inversion hypothesis: The role of asymmetry and attention biases

    PubMed Central

    Carnaghi, Andrea; Mitrovic, Aleksandra; Leder, Helmut; Fantoni, Carlo; Silani, Giorgia

    2018-01-01

    A controversial hypothesis, named the Sexualized Body Inversion Hypothesis (SBIH), claims similar visual processing of sexually objectified women (i.e., with a focus on the sexual body parts) and inanimate objects, as indicated by an absence of the inversion effect for both types of stimuli. The current study aims at shedding light on the mechanisms behind the SBIH in a series of 4 experiments. Using a modified version of Bernard et al.'s (2012) visual-matching task, first we tested the core assumption of the SBIH, namely that a similar processing style occurs for sexualized human bodies and objects. In Experiments 1 and 2 a non-sexualized (personalized) condition plus two object-control conditions (mannequins and houses) were included in the experimental design. Results showed an inversion effect for images of personalized women and mannequins, but not for sexualized women and houses. Second, we explored whether this effect was driven by differences in stimulus asymmetry, by testing the mediating and moderating role of this visual feature. In Experiment 3, we provided the first evidence that not only the sexual attributes of the images but also additional perceptual features of the stimuli, such as their asymmetry, played a moderating role in shaping the inversion effect. Lastly, we investigated the strategy adopted in the visual-matching task by tracking participants' eye movements. Results of Experiment 4 suggest an association between a specific pattern of visual exploration of the images and the presence of the inversion effect. Findings are discussed with respect to the literature on sexual objectification. PMID:29621249

  16. Visual motion direction is represented in population-level neural response as measured by magnetoencephalography.

    PubMed

    Kaneoke, Y; Urakawa, T; Kakigi, R

    2009-05-19

    We investigated whether direction information is represented in the population-level neural response evoked by the visual motion stimulus, as measured by magnetoencephalography. Coherent motions with varied speed, varied direction, and different coherence level were presented using random dot kinematography. Peak latency of responses to motion onset was inversely related to speed in all directions, as previously reported, but no significant effect of direction on latency changes was identified. Mutual information entropy (IE) calculated using four-direction response data increased significantly (>2.14) after motion onset in 41.3% of response data, and maximum IE was distributed at approximately 20 ms after peak response latency. When response waveforms showing significant differences (by multivariate discriminant analysis) in the distribution of the three waveform parameters (peak amplitude, peak latency, and 75% waveform width) across stimulus directions were analyzed, the stimulus directions of 87 waveforms (80.6%) were correctly estimated using these parameters. The correct estimation rate was unaffected by stimulus speed, but was affected by coherence level, even though both speed and coherence affected response amplitude similarly. Our results indicate that speed and direction of stimulus motion are represented in distinct properties of a response waveform, suggesting that the human brain processes speed and direction separately, at least in part.
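
    For concreteness, a small Python sketch of direction decoding from a few waveform parameters with linear discriminant analysis, in the spirit of the multivariate discriminant analysis mentioned above. The parameter values are simulated, not the MEG data from the study.

      # Decode stimulus direction from (amplitude, latency, width) with LDA on toy data.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      directions = np.repeat(np.arange(4), 50)            # 4 motion directions, 50 trials each

      amplitude = 100 + 5 * directions + rng.normal(0, 8, directions.size)
      latency   = 160 - 3 * directions + rng.normal(0, 6, directions.size)
      width     =  40 + 2 * directions + rng.normal(0, 5, directions.size)
      X = np.column_stack([amplitude, latency, width])    # trials x 3 waveform parameters

      scores = cross_val_score(LinearDiscriminantAnalysis(), X, directions, cv=5)
      print(f"cross-validated direction decoding accuracy: {scores.mean():.2f} (chance = 0.25)")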

  17. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing.
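
    As an illustration of the procedure described (r-based effect sizes pooled with a random-effects model), here is a short Python sketch using the Fisher z transform and the DerSimonian-Laird estimate of between-study variance. The correlations and sample sizes are invented placeholders, not the effect sizes from the reviewed studies.

      # Random-effects meta-analysis of correlation effect sizes (toy numbers).
      import numpy as np

      r = np.array([0.30, 0.45, 0.25, 0.38, 0.52])    # per-study correlations (invented)
      n = np.array([40, 55, 32, 80, 26])              # per-study sample sizes (invented)

      z = np.arctanh(r)                               # Fisher z transform
      v = 1.0 / (n - 3)                               # within-study variance of z
      w = 1.0 / v

      z_fixed = np.sum(w * z) / np.sum(w)             # fixed-effect pooled estimate
      Q = np.sum(w * (z - z_fixed) ** 2)              # heterogeneity statistic
      c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(r) - 1)) / c)         # DerSimonian-Laird between-study variance

      w_star = 1.0 / (v + tau2)                       # random-effects weights
      z_re = np.sum(w_star * z) / np.sum(w_star)
      se = np.sqrt(1.0 / np.sum(w_star))
      print(f"pooled r = {np.tanh(z_re):.3f}, "
            f"95% CI [{np.tanh(z_re - 1.96 * se):.3f}, {np.tanh(z_re + 1.96 * se):.3f}]")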

  18. Monkey Visual Short-Term Memory Directly Compared to Humans

    PubMed Central

    Elmore, L. Caitlin; Wright, Anthony A.

    2015-01-01

    Two adult rhesus monkeys were trained to detect which item in an array of memory items had changed using the same stimuli, viewing times, and delays as used with humans. Although the monkeys were extensively trained, they were less accurate than humans with the same array sizes (2, 4, & 6 items), with both stimulus types (colored squares, clip art), and showed calculated memory capacities of about one item (or less). Nevertheless, the memory results from both monkeys and humans for both stimulus types were well characterized by the inverse power-law of display size. This characterization provides a simple and straightforward summary of a fundamental process of visual short-term memory (how VSTM declines with memory load) that emphasizes species similarities based upon similar functional relationships. By more closely matching of monkey testing parameters to those of humans, the similar functional relationships strengthen the evidence suggesting similar processes underlying monkey and human VSTM. PMID:25706544
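
    The "inverse power-law of display size" mentioned above can be made concrete with a few lines of Python: performance is modeled as a * N**(-b) and fit as a straight line in log-log coordinates. The accuracy values below are invented placeholders, not the monkey or human data.

      # Fit an inverse power law of display size to toy accuracy values.
      import numpy as np

      display_size = np.array([2, 4, 6])               # memory array sizes used in the task
      accuracy = np.array([0.85, 0.62, 0.50])          # toy proportion-correct values

      # log(accuracy) = log(a) - b * log(N): a straight line in log-log space.
      slope, intercept = np.polyfit(np.log(display_size), np.log(accuracy), 1)
      a, b = np.exp(intercept), -slope
      print(f"fitted power law: accuracy ~= {a:.2f} * N^(-{b:.2f})")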

  19. Numerosity processing in early visual cortex.

    PubMed

    Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo

    2017-08-15

    While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.

    PubMed

    Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T

    2012-01-02

    Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, few studies found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversially debated, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  2. Expertise for upright faces improves the precision but not the capacity of visual working memory.

    PubMed

    Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank

    2014-10-01

    Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
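
    A minimal Python sketch of the kind of discrete-capacity mixture model referred to above: each method-of-adjustment report is either drawn from memory (a von Mises distribution around the true value) with probability p_mem or is a random guess, so capacity is roughly p_mem times the set size and precision follows from the von Mises concentration. This is the generic model family fit to simulated errors, not the authors' fitting code.

      # Discrete-capacity mixture model fit to simulated report errors (radians).
      import numpy as np
      from scipy.stats import vonmises
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      set_size = 5
      n = 300
      remembered = rng.random(n) < 0.6                          # 60% of reports come from memory
      errors = np.where(remembered,
                        rng.vonmises(0.0, 8.0, size=n),         # remembered: concentrated errors
                        rng.uniform(-np.pi, np.pi, size=n))     # forgotten: uniform guesses

      def neg_log_lik(params):
          p_mem, log_kappa = params
          p_mem = np.clip(p_mem, 1e-6, 1 - 1e-6)
          lik = p_mem * vonmises.pdf(errors, np.exp(log_kappa)) + (1 - p_mem) / (2 * np.pi)
          return -np.sum(np.log(lik))

      fit = minimize(neg_log_lik, x0=[0.5, np.log(5.0)], method="Nelder-Mead")
      p_mem_hat, kappa_hat = fit.x[0], np.exp(fit.x[1])
      print(f"p_mem = {p_mem_hat:.2f}, capacity ~= {p_mem_hat * set_size:.1f} items, "
            f"kappa = {kappa_hat:.1f}")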

  3. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  4. Dynamic mapping of the human visual cortex by high-speed magnetic resonance imaging.

    PubMed Central

    Blamire, A M; Ogawa, S; Ugurbil, K; Rothman, D; McCarthy, G; Ellermann, J M; Hyder, F; Rattner, Z; Shulman, R G

    1992-01-01

    We report the use of high-speed magnetic resonance imaging to follow the changes in image intensity in the human visual cortex during stimulation by a flashing checkerboard stimulus. Measurements were made in a 2.1-T, 1-m-diameter magnet, part of a Bruker Biospec spectrometer that we had programmed to do echo-planar imaging. A 15-cm-diameter surface coil was used to transmit and receive signals. Images were acquired during periods of stimulation from 2 s to 180 s. Images were acquired in 65.5 ms in a 10-mm slice with in-plane voxel size of 6 x 3 mm. Repetition time (TR) was generally 2 s, although for the long flashing periods, TR = 8 s was used. Voxels were located onto an inversion recovery image taken with 2 x 2 mm in-plane resolution. Image intensity increased after onset of the stimulus. The mean change in signal relative to the prestimulation level (delta S/S) was 9.7% (SD = 2.8%, n = 20) with an echo time of 70 ms. Irrespective of the period of stimulation, the increase in magnetic resonance signal intensity was delayed relative to the stimulus. The mean delay measured from the start of stimulation for each protocol was as follows: 2-s stimulation, delay = 3.5 s (SD = 0.5 s, n = 10) (the delay exceeds stimulus duration); 20- to 24-s stimulation, delay = 5 s (SD = 2 s, n = 20). PMID:1438317

  5. Gamma and Beta Oscillations in Human MEG Encode the Contents of Vibrotactile Working Memory.

    PubMed

    von Lautz, Alexander H; Herding, Jan; Ludwig, Simon; Nierhaus, Till; Maess, Burkhard; Villringer, Arno; Blankenburg, Felix

    2017-01-01

    Ample evidence suggests that oscillations in the beta band represent quantitative information about somatosensory features during stimulus retention. Visual and auditory working memory (WM) research, on the other hand, has indicated a predominant role of gamma oscillations for active WM processing. Here we reconciled these findings by recording whole-head magnetoencephalography during a vibrotactile frequency comparison task. A Braille stimulator presented healthy subjects with a vibration to the left fingertip that was retained in WM for comparison with a second stimulus presented after a short delay. During this retention interval spectral power in the beta band from the right intraparietal sulcus and inferior frontal gyrus (IFG) monotonically increased with the to-be-remembered vibrotactile frequency. In contrast, induced gamma power showed the inverse of this pattern and decreased with higher stimulus frequency in the right IFG. Together, these results expand the previously established role of beta oscillations for somatosensory WM to the gamma band and give further evidence that quantitative information may be processed in a fronto-parietal network.

  6. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  7. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    PubMed

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  8. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
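
    The key dependent measure here, the self-motion velocity at which both response directions are equally likely, is a point of subjective equality (PSE). A short Python sketch of estimating it by fitting a logistic psychometric function follows; the response proportions are invented for illustration.

      # Estimate the PSE of a direction-report psychometric function (toy data).
      import numpy as np
      from scipy.optimize import curve_fit

      velocity = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)       # inertial stimulus, cm/s
      p_right = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])   # toy "rightward" rates

      def logistic(x, pse, slope):
          return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

      (pse, slope), _ = curve_fit(logistic, velocity, p_right, p0=[0.0, 1.0])
      print(f"PSE = {pse:.2f} cm/s; a shift away from 0 indicates biased self-motion perception")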

  9. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  10. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.

  11. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    PubMed

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

    Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. TMS effects on subjective and objective measures of vision: stimulation intensity and pre- versus post-stimulus masking.

    PubMed

    de Graaf, Tom A; Cornelsen, Sonja; Jacobs, Christianne; Sack, Alexander T

    2011-12-01

    Transcranial magnetic stimulation (TMS) can be used to mask visual stimuli, disrupting visual task performance or preventing visual awareness. While TMS masking studies generally fix stimulation intensity, we hypothesized that varying the intensity of TMS pulses in a masking paradigm might inform several ongoing debates concerning TMS disruption of vision as measured subjectively versus objectively, and pre-stimulus (forward) versus post-stimulus (backward) TMS masking. We here show that both pre-stimulus TMS pulses and post-stimulus TMS pulses could strongly mask visual stimuli. We found no dissociations between TMS effects on the subjective and objective measures of vision for any masking window or intensity, ruling out the option that TMS intensity levels determine whether dissociations between subjective and objective vision are obtained. For the post-stimulus time window particularly, we suggest that these data provide new constraints for (e.g. recurrent) models of vision and visual awareness. Finally, our data are in line with the idea that pre-stimulus masking operates differently from conventional post-stimulus masking. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
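
    One simple way to read "combining in a largely linear fashion" is as a regression with a baseline-by-stimulus interaction term that comes out near zero. The Python sketch below illustrates that logic on simulated trial data; it is not the fMRI analysis used in the study.

      # Test additivity of spontaneous (prestimulus) and evoked activity (toy data).
      import numpy as np

      rng = np.random.default_rng(5)
      n = 400
      baseline = rng.normal(0, 1, n)                   # spontaneous prestimulus level
      stim = rng.integers(0, 2, n).astype(float)       # 1 = stimulus present, 0 = absent
      response = baseline + 0.8 * stim + rng.normal(0, 0.3, n)   # additive by construction

      # Design matrix: intercept, baseline, stimulus, baseline x stimulus interaction.
      X = np.column_stack([np.ones(n), baseline, stim, baseline * stim])
      beta, *_ = np.linalg.lstsq(X, response, rcond=None)
      print("intercept, baseline, stimulus, interaction:", np.round(beta, 2))
      # A near-zero interaction coefficient corresponds to purely additive combination.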

  14. Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements

    PubMed Central

    McFarland, James M.; Cumming, Bruce G.

    2016-01-01

    The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801
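
    For reference, the conventional repeat-based split of response variance into stimulus-driven ("signal") and trial-variable ("noise") components, which the abstract argues is biased when fixational eye movements make nominally identical repeats differ, looks roughly like the Python sketch below on simulated spike counts.

      # Conventional signal/noise variance decomposition across stimulus repeats (toy data).
      import numpy as np

      rng = np.random.default_rng(4)
      n_repeats, n_bins = 40, 100
      true_rate = 5 + 4 * np.sin(np.linspace(0, 4 * np.pi, n_bins))   # stimulus-locked rate
      counts = rng.poisson(true_rate, size=(n_repeats, n_bins))       # repeats x time bins

      psth = counts.mean(axis=0)                       # trial-averaged response
      noise_var = counts.var(axis=0, ddof=1).mean()    # across-repeat ("noise") variance
      signal_var = psth.var(ddof=1) - noise_var / n_repeats   # bias-corrected signal variance
      print(f"signal variance ~= {signal_var:.2f}, noise variance ~= {noise_var:.2f}")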

  15. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  16. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation.

    PubMed

    Tapia, Evelina; Beck, Diane M

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.

  17. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    NASA Astrophysics Data System (ADS)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

    As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving call for new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising tool for assessing fatigue level. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving using our fNIRS-based device. Each experiment lasted 7 hours, and one of the three specific tests, probing the driver's responses to sounds, traffic lights, or direction signs, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and visual stimuli from traffic-light scenes induced fatigue more readily than visual stimuli from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, next for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual color, and visual character stimuli in their sensitivity to cause driving fatigue, which is meaningful for driving safety management.

  18. Stimulus Effects on Local Preference: Stimulus-Response Contingencies, Stimulus-Food Pairing, and Stimulus-Food Correlation

    ERIC Educational Resources Information Center

    Davison, Michael; Baum, William M.

    2010-01-01

    Four pigeons were trained in a procedure in which concurrent-schedule food ratios changed unpredictably across seven unsignaled components after 10 food deliveries. Additional green-key stimulus presentations also occurred on the two alternatives, sometimes in the same ratio as the component food ratio, and sometimes in the inverse ratio. In eight…

  19. Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance

    PubMed Central

    Veniero, Domenica

    2017-01-01

    Abstract Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794

  20. Synergistic interaction between baclofen administration into the median raphe nucleus and inconsequential visual stimuli on investigatory behavior of rats

    PubMed Central

    Vollrath-Smith, Fiori R.; Shin, Rick

    2011-01-01

    Rationale Noncontingent administration of amphetamine into the ventral striatum or systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) was not accompanied by increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist, SCH 50911, confirming the involvement of local GABAB receptors. Visual stimulus seeking also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist, SCH 23390 (0.025 mg/kg), suggesting that enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820

  1. Effects of age, gender, and stimulus presentation period on visual short-term memory.

    PubMed

    Kunimi, Mitsunobu

    2016-01-01

    This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
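
    For readers unfamiliar with the dependent measure, d' in a standard yes/no signal-detection framework is the difference between the z-transformed hit and false-alarm rates; the abstract does not specify the exact variant used, so this is the conventional definition rather than a detail taken from the study.

    ```latex
    d' = \Phi^{-1}(H) - \Phi^{-1}(F)
    ```

    where H is the hit rate, F the false-alarm rate, and \Phi^{-1} the inverse of the standard normal cumulative distribution function.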

  2. Visual and auditory accessory stimulus offset and the Simon effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-10-01

    We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.

  3. Recording from two neurons: second-order stimulus reconstruction from spike trains and population coding.

    PubMed

    Fernandes, N M; Pinto, B D L; Almeida, L O B; Slaets, J F W; Köberle, R

    2010-10-01

    We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by expanding our variables in a convenient set of basis functions. This requires approximating the spike-train four-point functions by combinations of two-point functions, using relations that would hold exactly for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution to stimulus reconstruction of the second-order kernels, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
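
    For orientation, a second-order Volterra reconstruction of the stimulus from the spike times t_i of a single train has the general form below; the zeroth-, first-, and second-order kernels h_0, h_1, h_2 are what such methods estimate, and with two neurons additional cross-kernels between the trains appear. The basis-function expansion and the Gaussian-like factorization of the four-point functions described in the abstract are not reproduced here.

    ```latex
    s_{\mathrm{est}}(t) = h_0 + \sum_i h_1(t - t_i) + \sum_{i \neq j} h_2(t - t_i,\ t - t_j)
    ```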

  4. Probing feedforward and feedback contributions to awareness with visual masking and transcranial magnetic stimulation

    PubMed Central

    Tapia, Evelina; Beck, Diane M.

    2014-01-01

    A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently, transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and, as a result, modify visual awareness. The most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness. PMID:25374548

  5. Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface.

    PubMed

    Ng, Kian B; Bradley, Andrew P; Cunnington, Ross

    2012-06-01

    The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, subtend at least 2° of visual angle, and are tagged at a frequency between the high-alpha and beta bands. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.
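
    The abstract does not describe the classification pipeline itself, so the following is only a minimal sketch of how a frequency-tagged SSVEP trial can be classified: spectral power is read out at each candidate tagging frequency and the strongest candidate is taken as the attended stimulus. All names and parameter values are illustrative; practical SSVEP-BCIs usually add harmonics, spatial filtering (e.g. canonical correlation analysis), and multi-channel combination.

    ```python
    import numpy as np

    def classify_ssvep(eeg, fs, tag_freqs):
        """Return the candidate tagging frequency with the largest spectral power.

        eeg: 1-D array holding one single-channel trial.
        fs: sampling rate in Hz.
        tag_freqs: candidate stimulation frequencies in Hz.
        """
        windowed = eeg * np.hanning(len(eeg))            # taper to reduce spectral leakage
        power = np.abs(np.fft.rfft(windowed)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        scores = [power[np.argmin(np.abs(freqs - f))] for f in tag_freqs]
        return tag_freqs[int(np.argmax(scores))]

    # Toy check: 2 s of synthetic data tagged at 12 Hz, candidates 10/12/15 Hz.
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    trial = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
    print(classify_ssvep(trial, fs, [10, 12, 15]))       # expected output: 12
    ```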

  6. Stimulus specificity of a steady-state visual-evoked potential-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Ng, Kian B.; Bradley, Andrew P.; Cunnington, Ross

    2012-06-01

    The mechanisms of neural excitation and inhibition when given a visual stimulus are well studied. It has been established that changing stimulus specificity such as luminance contrast or spatial frequency can alter the neuronal activity and thus modulate the visual-evoked response. In this paper, we study the effect that stimulus specificity has on the classification performance of a steady-state visual-evoked potential-based brain-computer interface (SSVEP-BCI). For example, we investigate how closely two visual stimuli can be placed before they compete for neural representation in the cortex and thus influence BCI classification accuracy. We characterize stimulus specificity using the four stimulus parameters commonly encountered in SSVEP-BCI design: temporal frequency, spatial size, number of simultaneously displayed stimuli and their spatial proximity. By varying these quantities and measuring the SSVEP-BCI classification accuracy, we are able to determine the parameters that provide optimal performance. Our results show that superior SSVEP-BCI accuracy is attained when stimuli are placed spatially more than 5° apart, subtend at least 2° of visual angle, and are tagged at a frequency between the high-alpha and beta bands. These findings may assist in deciding the stimulus parameters for optimal SSVEP-BCI design.

  7. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that, when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect in that participants reported the visual component of the audiovisual stimuli more often than the auditory one. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  8. Gestalt perception modulates early visual processing.

    PubMed

    Herrmann, C S; Bosch, V

    2001-04-17

    We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli, which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that elicited by the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.

  9. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a Java-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual field; the difference was nonsignificant in the left visual field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  10. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  11. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    PubMed Central

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2010-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833

  12. Concurrent visual and tactile steady-state evoked potentials index allocation of inter-modal attention: a frequency-tagging study.

    PubMed

    Porcu, Emanuele; Keitel, Christian; Müller, Matthias M

    2013-11-27

    We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz) and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
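
    The two SSEP measures reported here, amplitude and phase synchrony at a tagged frequency, are commonly obtained from the complex Fourier coefficient at that frequency across trials; the sketch below computes mean amplitude and inter-trial phase coherence under that generic recipe, not the study's exact pipeline. Evaluating it at the fundamental (e.g. 7.5 Hz) and at its second harmonic (15 Hz) yields the frequency-following and frequency-doubling responses the abstract distinguishes.

    ```python
    import numpy as np

    def ssep_measures(trials, fs, tag_freq):
        """Mean amplitude and inter-trial phase coherence (ITC) at one frequency.

        trials: array of shape (n_trials, n_samples) with single-trial epochs.
        Generic frequency-tagging recipe; illustrative, not the study's code.
        """
        n = trials.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = np.argmin(np.abs(freqs - tag_freq))          # FFT bin closest to the tag
        coeffs = np.fft.rfft(trials, axis=1)[:, k]       # complex coefficient per trial
        amplitude = 2.0 / n * np.mean(np.abs(coeffs))    # mean single-trial amplitude
        itc = np.abs(np.mean(coeffs / np.abs(coeffs)))   # phase consistency, 0..1
        return amplitude, itc

    # Toy demo: 30 trials of a 7.5 Hz response with small phase jitter plus noise.
    fs = 500
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(0)
    phase = rng.uniform(-0.5, 0.5, size=(30, 1))
    trials = np.cos(2 * np.pi * 7.5 * t + phase) + rng.standard_normal((30, t.size))
    print(ssep_measures(trials, fs, 7.5))
    ```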

  13. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study.

    PubMed

    Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M

    2017-11-01

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audiovisual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect, facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs), possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Hemispheric differences in visual search of simple line arrays.

    PubMed

    Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W

    1990-01-01

    The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.

  15. Task- and age-dependent effects of visual stimulus properties on children's explicit numerosity judgments.

    PubMed

    Defever, Emmy; Reynvoet, Bert; Gebuis, Titia

    2013-10-01

    Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues continued to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Different target-discrimination times can be followed by the same saccade-initiation timing in different stimulus conditions during visual searches

    PubMed Central

    Tanaka, Tomohiro; Nishida, Satoshi

    2015-01-01

    The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
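
    The accumulation-to-threshold account invoked in the last sentence can be illustrated with a toy simulation: the post-discrimination interval is modelled as the time a noisy accumulator, driven by the visual response strength at discrimination time, needs to reach a fixed saccade threshold, so a dimmer stimulus (weaker drive) produces a longer interval. All parameter values below are arbitrary illustrations, not fits to the LIP data.

    ```python
    import numpy as np

    def time_to_threshold(drive, threshold=1.0, noise=0.1, dt=0.001,
                          t_max=2.0, rng=None):
        """Time for a noisy accumulator with constant drive to reach threshold.

        'drive' stands in for visual response strength at discrimination time;
        the simulation is capped at t_max seconds to guarantee termination.
        """
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        while x < threshold and t < t_max:
            x += drive * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    # Weaker drive (Dim) reaches threshold later on average than stronger drive (Bright).
    rng = np.random.default_rng(0)
    dim = np.mean([time_to_threshold(2.0, rng=rng) for _ in range(500)])
    bright = np.mean([time_to_threshold(4.0, rng=rng) for _ in range(500)])
    print(f"dim: {dim:.3f} s, bright: {bright:.3f} s")
    ```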

  17. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  18. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.

  20. Eccentricity effects in vision and attention.

    PubMed

    Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe

    2016-11-01

    Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Spatial updating in human parietal cortex

    NASA Technical Reports Server (NTRS)

    Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.

    2003-01-01

    Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

  2. Moving Stimuli Facilitate Synchronization But Not Temporal Perception

    PubMed Central

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419

  3. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    PubMed

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  4. Look before you leap: sensory memory improves decision making.

    PubMed

    Vlassova, Alexandra; Pearson, Joel

    2013-09-01

    Simple decisions require the processing and evaluation of perceptual and cognitive information, the formation of a decision, and often the execution of a motor response. This process involves the accumulation of evidence over time until a particular choice reaches a decision threshold. Using a random-dot-motion stimulus, we showed that simply delaying responses after the stimulus offset can almost double accuracy, even in the absence of new incoming visual information. However, under conditions in which the otherwise blank interval was filled with a sensory mask or concurrent working memory load was high, performance gains were lost. Further, memory and perception showed equivalent rates of evidence accumulation, suggesting a high-capacity memory store. We propose an account of continued evidence accumulation by sequential sampling from a simultaneously decaying memory trace. Memories typically decay with time, hence immediate inquiry trumps later recall from memory. However, the results we report here show the inverse: Inspecting a memory trumps viewing the actual object.
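
    The proposed account, sequential sampling from a simultaneously decaying memory trace, can be sketched as follows: evidence samples continue to accumulate after stimulus offset, but their quality falls off exponentially, so a post-offset delay still improves the total evidence while yielding diminishing returns. The parameter values are hypothetical and only illustrate the qualitative idea.

    ```python
    import numpy as np

    def accumulated_evidence(delay, quality=1.0, tau=0.3, noise=1.0,
                             dt=0.01, rng=None):
        """Total evidence after sampling a decaying memory trace for `delay` seconds.

        Sample quality decays exponentially with time constant tau (seconds);
        all values are hypothetical, chosen only to show the qualitative pattern.
        """
        rng = rng or np.random.default_rng()
        t = np.arange(0.0, delay, dt)
        samples = quality * np.exp(-t / tau) + noise * rng.standard_normal(t.size)
        return samples.sum() * dt

    # Longer post-offset delays accumulate more evidence, with diminishing returns.
    rng = np.random.default_rng(1)
    for delay in (0.1, 0.4, 0.8):
        mean_evidence = np.mean([accumulated_evidence(delay, rng=rng) for _ in range(2000)])
        print(f"{delay:.1f} s delay -> mean evidence {mean_evidence:.3f}")
    ```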

  5. Dynamic reweighting of three modalities for sensor fusion.

    PubMed

    Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J

    2014-01-01

    We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.

  6. Effects of visual cue and response assignment on spatial stimulus coding in stimulus-response compatibility.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2012-01-01

    Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on the superordinate unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with the contribution of each coding to the SRC effect flexibly varying with task situations.

  7. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287

  8. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

    Rationale: Unconscious perception of various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study tried to assess whether unconscious visual perception could occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. The perception of the stimulus was evaluated with a forced-choice questionnaire while awareness was quantified by self-assessment with a modified awareness scale annexed to each question, comprising 4 categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly above-chance answers were coupled with a degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended with a focus on stimulus durations between 16.66 and 50 ms.
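
    The 16.66 ms floor presumably corresponds to a single frame of a 60 Hz display (an assumption; the refresh rate is not stated in the abstract), which would make the tested durations integer multiples of one frame:

    ```latex
    t_{\text{frame}} = \frac{1000\ \text{ms}}{60} \approx 16.7\ \text{ms}, \qquad
    50\ \text{ms} \approx 3\,t_{\text{frame}}
    ```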

  9. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    PubMed

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  10. Role of inter-hemispheric transfer in generating visual evoked potentials in V1-damaged brain hemispheres

    PubMed Central

    Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.

    2015-01-01

    Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450

  11. The Effect of Visual Threat on Spatial Attention to Touch

    ERIC Educational Resources Information Center

    Poliakoff, Ellen; Miles, Eleanor; Li, Xinying; Blanchette, Isabelle

    2007-01-01

    Viewing a threatening stimulus can bias visual attention toward that location. Such effects have typically been investigated only in the visual modality, despite the fact that many threatening stimuli are most dangerous when close to or in contact with the body. Recent multisensory research indicates that a neutral visual stimulus, such as a light…

  12. Differential modulation of visual object processing in dorsal and ventral stream by stimulus visibility.

    PubMed

    Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido

    2016-10-01

    As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, showing a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also showed a dependency on mask contrast, but the decrease rather followed a step function instead of a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness compared to dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Myoelectric stimulation on peroneal muscles resists simulated ankle sprain motion.

    PubMed

    Fong, Daniel Tik-Pui; Chu, Vikki Wing-Shan; Chan, Kai-Ming

    2012-07-26

    The inadequate reaction time of the peroneal muscles in response to an incorrect foot contact event has been proposed as one of the etiological factors contributing to ankle joint inversion injury. Thus, the current study aimed to investigate the efficacy of myoelectric stimulation applied to the peroneal muscles in preventing a simulated ankle inversion trauma. Ten healthy male subjects performed simulated inversion and supination tests on a pair of mechanical sprain simulators. An electrical signal was delivered to the peroneal muscles of the subjects through a pair of electrode pads. The start of the stimulus was synchronized with the drop of the sprain simulator's platform. In order to determine the maximum delay at which the stimulus could still resist the simulated ankle sprain motion, different delay times were tested (0, 5, 10, and 15 ms). Together with the control trial (no stimulus), there were 5 testing conditions for both the simulated inversion and supination tests. The effect was quantified by the drop in maximum ankle tilting angle and angular velocity, as determined by a motion analysis system with a standard laboratory procedure. Results showed that the myoelectric stimulation was effective in all conditions except the one in which the stimulus was delayed by 15 ms in the simulated supination test. It is concluded that myoelectric stimulation of the peroneal muscles can resist an ankle spraining motion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Visual Masking During Pursuit Eye Movements

    ERIC Educational Resources Information Center

    White, Charles W.

    1976-01-01

    Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking--that the target and masking stimuli are flashed on the same part of the retina, or, that the target and mask appear in the same place. (Author/RK)

  15. Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.

    PubMed

    Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D

    2013-10-01

    Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Do People Take Stimulus Correlations into Account in Visual Search (Open Source)

    DTIC Science & Technology

    2016-03-10

    Do People Take Stimulus Correlations into Account in Visual Search? Manisha Bhardwaj, Ronald van den Berg, Wei Ji Ma. …In visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often… …contribute to bridging the gap between artificial and natural visual search tasks. …Visual target detection in displays consisting of multiple…

  17. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  18. Teaching Equivalence Relations to Individuals with Minimal Verbal Repertoires: Are Visual and Auditory-Visual Discriminations Predictive of Stimulus Equivalence?

    ERIC Educational Resources Information Center

    Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina

    2005-01-01

    The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…

  19. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  20. Effect of eye position during human visual-vestibular integration of heading perception.

    PubMed

    Crane, Benjamin T

    2017-09-01

    Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
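
    The "Bayesian ideal predictions" mentioned above refer to the standard reliability-weighted cue-combination rule, in which each cue is weighted by its inverse variance. The sketch below illustrates that rule only; it is not the authors' analysis code, and the heading values and noise levels are hypothetical.

```python
import numpy as np

def fuse_headings(mu_vis, sigma_vis, mu_inert, sigma_inert):
    """Reliability-weighted (Bayesian ideal) combination of two heading cues.

    Each cue is treated as a Gaussian estimate; the ideal observer weights
    each cue by its inverse variance (its reliability).
    """
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_inert**2)
    w_inert = 1.0 - w_vis
    mu_comb = w_vis * mu_vis + w_inert * mu_inert
    sigma_comb = np.sqrt(1.0 / (1 / sigma_vis**2 + 1 / sigma_inert**2))
    return mu_comb, sigma_comb, w_vis

# Hypothetical numbers: with eccentric gaze the visual and inertial headings
# disagree by ~15 deg. Lowering visual coherence (larger sigma_vis) shifts the
# combined estimate toward the inertial cue, mirroring the reported PSE shifts.
for sigma_vis in (2.0, 5.0, 10.0):  # deg; smaller = more reliable
    mu, sigma, w = fuse_headings(mu_vis=-10.0, sigma_vis=sigma_vis,
                                 mu_inert=5.0, sigma_inert=4.0)
    print(f"sigma_vis={sigma_vis:4.1f}  combined heading={mu:6.2f} deg  visual weight={w:.2f}")
```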

  1. Stimulus Dependence of Correlated Variability across Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
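
    The normalization model invoked above, in which MT units sum V1 inputs that are passed through a divisive nonlinearity, has the generic form R_MT = sum_j w_j r_j^n / (sigma^n + sum_k r_k^n). The sketch below shows only that generic computation; the weights, exponent, and semisaturation constant are illustrative assumptions, not the parameters fitted in the study.

```python
import numpy as np

def mt_response(v1_rates, weights, sigma=1.0, n=2.0):
    """Generic divisive-normalization model of an MT unit.

    The unit sums weighted, exponentiated V1 inputs and divides by the
    pooled V1 activity plus a semisaturation constant.
    """
    v1_rates = np.asarray(v1_rates, dtype=float)
    drive = np.dot(weights, v1_rates ** n)
    pool = sigma ** n + np.sum(v1_rates ** n)
    return drive / pool

# Hypothetical illustration: adding a second stimulus enlarges the
# normalization pool, which changes how V1 activity (and its shared
# variability) is transmitted to the MT unit.
weights = np.array([1.0, 0.2, 0.2])
print(mt_response([20.0, 5.0, 5.0], weights))    # one stimulus in the pool
print(mt_response([20.0, 25.0, 5.0], weights))   # a second stimulus added
```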

  2. Object form discontinuity facilitates displacement discrimination across saccades.

    PubMed

    Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl

    2010-06-01

    Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.

  3. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    [Figure-caption fragment: gaze horizontal position plotted along the y-axis; a red bar marks a visual nystagmus event detected by the filter.] …experimental conditions were chosen to simulate testing cognitively impaired observers. …developed a new stimulus for visual nystagmus to test visual motion processing in the presence of incoherent motion noise. The drifting equiluminant…

  4. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.

  5. Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.

    PubMed

    Lundqvist, Daniel; Bruce, Neil; Öhman, Arne

    2015-01-01

    In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial experiment, we run a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency. Among the emotional measures, salience on arousal was more influential than salience on valence. The importance of the arousal factor may contribute to the contradictory history of results within this field.
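
    The passage above describes a series of regression analyses relating salience measures to search efficiency. The sketch below shows the general form of such an analysis on synthetic, clearly hypothetical numbers; it is not the authors' code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: each row is one target/distractor combination, with
# salience scores on a perceptual measure and on emotional arousal and valence
# measures (all numbers are made up for the example).
n_combos = 60
perceptual = rng.uniform(0, 1, n_combos)
arousal = rng.uniform(0, 1, n_combos)
valence = rng.uniform(0, 1, n_combos)
# Assume search RT shortens as target salience grows, plus noise.
rt = 900 - 150 * perceptual - 120 * arousal - 30 * valence + rng.normal(0, 25, n_combos)

# Ordinary least-squares regression of RT on the three salience predictors.
X = np.column_stack([np.ones(n_combos), perceptual, arousal, valence])
coefs, *_ = np.linalg.lstsq(X, rt, rcond=None)
for name, b in zip(["intercept", "perceptual", "arousal", "valence"], coefs):
    print(f"{name:10s} {b:8.1f}")
```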

  6. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulate auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

  8. Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry

    PubMed Central

    Jaworska, Katarzyna; Lages, Martin

    2014-01-01

    Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063

  9. Reduced attention and increased impulsivity in mice lacking NPY Y2 receptors: relation to anxiolytic-like phenotype.

    PubMed

    Greco, Barbara; Carli, Mirjana

    2006-05-15

    Neuropeptide Y (NPY) Y2 receptors play an important role in some anxiety-related and stress-related behaviours in mice. Changes in the level of anxiety can affect some cognitive functions such as memory, attention and inhibitory response control. We investigated the effects of NPY Y2 receptor deletion (Y2(-/-)) in mice on visual attention and response control using the five-choice serial reaction time (5-CSRT) task, in which accuracy of detection of a brief visual stimulus across five spatial locations may serve as a valid behavioural index of attentional functioning. Anticipatory and perseverative responses provide a measure of inhibitory response control. During training, the Y2(-/-) mice had lower accuracy (% correct) and made more anticipatory responses. At stimulus durations of 2 and 4 s the Y2(-/-) mice were as accurate as the Y2(+/+) mice but were still more impulsive. At stimulus durations of 0.25 and 0.5 s both groups performed worse, but the Y2(-/-) mice made significantly fewer correct responses than the Y2(+/+) controls. The anxiolytic drug diazepam at 2 mg/kg IP greatly increased the anticipatory responding of Y2(-/-) mice compared to Y2(+/+) mice. The anxiogenic benzodiazepine inverse agonist FG 7142, at 10 mg/kg IP, reduced the anticipatory responding of Y2(-/-) but not Y2(+/+) mice. These data suggest that NPY Y2 receptors make an important contribution to mechanisms controlling attentional functioning and "impulsivity". They also show that the "impulsivity" of NPY Y2(-/-) mice may depend on their level of anxiety. These findings may help in understanding the pathophysiology of stress disorders and depression.

  10. Differential effects of ongoing EEG beta and theta power on memory formation

    PubMed Central

    Scholz, Sebastian; Schneider, Signe Luisa

    2017-01-01

    Recently, elevated ongoing pre-stimulus beta power (13–17 Hz) at encoding has been associated with subsequent memory formation for visual stimulus material. It is unclear whether this activity is merely specific to visual processing or whether it reflects a state facilitating general memory formation, independent of stimulus modality. To answer that question, the present study investigated the relationship between neural pre-stimulus oscillations and verbal memory formation in different sensory modalities. For that purpose, a within-subject design was employed to explore differences between successful and failed memory formation in the visual and auditory modality. Furthermore, associative memory was addressed by presenting the stimuli in combination with background images. Results revealed that similar EEG activity in the low beta frequency range (13–17 Hz) is associated with subsequent memory success, independent of stimulus modality. Elevated power prior to stimulus onset differentiated successful from failed memory formation. In contrast, differential effects between modalities were found in the theta band (3–7 Hz), with an increased oscillatory activity before the onset of later remembered visually presented words. In addition, pre-stimulus theta power dissociated between successful and failed encoding of associated context, independent of the stimulus modality of the item itself. We therefore suggest that increased ongoing low beta activity reflects a memory promoting state, which is likely to be moderated by modality-independent attentional or inhibitory processes, whereas high ongoing theta power is suggested as an indicator of the enhanced binding of incoming interlinked information. PMID:28192459

  11. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.

    PubMed

    Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel

    2014-08-01

    Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.

  14. Effects of Systematically Depriving Access to Computer-Based Stimuli on Choice Responding with Individuals with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Reyer, Howard S.; Sturmey, Peter

    2009-01-01

    Three adults with intellectual disabilities participated to investigate the effects of reinforcer deprivation on choice responding. The experimenter identified the most preferred audio-visual (A-V) stimulus and the least preferred visual-only stimulus for each participant. Participants did not have access to the A-V stimulus for 5 min, 5 and 24 h.…

  15. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  16. Stimulus onset predictability modulates proactive action control in a Go/No-go task

    PubMed Central

    Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2015-01-01

    The aim of the study was to evaluate whether the presence or absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity associated with the proactive control of goal-directed actions. To this aim, we asked 12 subjects to perform an equal-probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than the uncued condition, confirming existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slow-rising prefrontal positive activity that was more pronounced in the cued than the uncued condition. Pre-stimulus activity of premotor areas was also larger in the cued than the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751

  17. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  18. Modality-dependent effect of motion information in sensory-motor synchronised tapping.

    PubMed

    Ono, Kentaro

    2018-05-14

    Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both the auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely reflect the higher temporal resolution of the auditory domain, which is probably due to physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low-frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boosted the VEP amplitude of incoming visual stimuli when the stimuli were presented at the high-excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and can serve as a gate for incoming sensory inputs. Significance Statement: Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, a mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low-frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity-evoked oscillations influence neuronal responses to incoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.

  20. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  1. Size matters: large objects capture attention in visual search.

    PubMed

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to demonstrate stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study had satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of display-wide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.

  2. Stimulus change as a factor in response maintenance with free food available.

    PubMed Central

    Osborne, S R; Shelby, M

    1975-01-01

    Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121

  3. The Influence of Stimulus Material on Attention and Performance in the Visual Expectation Paradigm: A Longitudinal Study with 3- And 6-Month-Old Infants

    ERIC Educational Resources Information Center

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2012-01-01

    This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…

  4. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  5. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  6. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner

    PubMed Central

    Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.

    2013-01-01

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388

  7. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  8. Square or sine: finding a waveform with high success rate of eliciting SSVEP.

    PubMed

    Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight

    2011-01-01

    Steady-state visual evoked potential (SSVEP) is the brain's natural electrical response to visual stimuli flickering at specific frequencies. A visual stimulus flashing at a given frequency entrains the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher the information transmission rate the system achieves. Thus, an effective stimulus, defined as one with a high success rate of eliciting SSVEP and a high signal-to-noise ratio, is desired. Researchers have also observed that harmonic frequencies often appear in the SSVEP at reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency or by artifacts of the stimuli? In this paper, we compare the SSVEP responses to three periodic stimuli, square wave (with different duty cycles), triangle wave, and sine wave, to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in the SSVEP and the type of stimulus.
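
    To make the harmonics question concrete: a sine wave concentrates its energy at the fundamental, whereas square waves (especially with duty cycles other than 50%) and triangle waves carry substantial energy at harmonics, so harmonic components in the SSVEP could in principle be inherited from the stimulus waveform itself. The sketch below, which is not from the paper, simply compares the spectra of the three candidate waveforms; the 15 Hz flicker frequency and duty cycles are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs, f0, dur = 1000, 15, 2.0  # sample rate (Hz), flicker frequency (Hz), seconds
t = np.arange(0, dur, 1 / fs)

stimuli = {
    "sine": np.sin(2 * np.pi * f0 * t),
    "square 50% duty": signal.square(2 * np.pi * f0 * t, duty=0.5),
    "square 20% duty": signal.square(2 * np.pi * f0 * t, duty=0.2),
    "triangle": signal.sawtooth(2 * np.pi * f0 * t, width=0.5),
}

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, wave in stimuli.items():
    spectrum = np.abs(np.fft.rfft(wave)) / t.size
    # Amplitude at the fundamental and the first two harmonics.
    amps = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in (1, 2, 3)]
    print(f"{name:16s}  15 Hz: {amps[0]:.3f}   30 Hz: {amps[1]:.3f}   45 Hz: {amps[2]:.3f}")
```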

  9. Stimulus-dependent modulation of spontaneous low-frequency oscillations in the rat visual cortex.

    PubMed

    Huang, Liangming; Liu, Yadong; Gui, Jianjun; Li, Ming; Hu, Dewen

    2014-08-06

    Research on spontaneous low-frequency oscillations is important to reveal underlying regulatory mechanisms in the brain. The mechanism for the stimulus modulation of low-frequency oscillations is not known. Here, we used the intrinsic optical imaging technique to examine stimulus-modulated low-frequency oscillation signals in the rat visual cortex. The stimulation was presented monocularly as a flashing light with different frequencies and intensities. The phases of low-frequency oscillations in different regions tended to be synchronized and the rhythms typically accelerated within a 30-s period after stimulation. These phenomena were confined to visual stimuli with specific flashing frequencies (12.5-17.5 Hz) and intensities (5-10 mA). The acceleration and synchronization induced by the flashing frequency were more marked than those induced by the intensity. These results show that spontaneous low-frequency oscillations can be modulated by parameter-dependent flashing lights and indicate the potential utility of the visual stimulus paradigm in exploring the origin and function of low-frequency oscillations.

  10. Perceptual grouping across eccentricity.

    PubMed

    Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan

    2014-10-01

    Across the visual field, progressive differences exist in neural processing as well as in perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for higher-order functions. It was hypothesized that, as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable out to 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied with stimulus feature, in which stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions.

    PubMed

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-08-04

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.

  12. A Unifying Motif for Spatial and Directional Surround Suppression.

    PubMed

    Liu, Liu D; Miller, Kenneth D; Pack, Christopher C

    2018-01-24

    In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1. As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors 0270-6474/18/380989-11$15.00/0.
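
    For readers unfamiliar with the SSN, the sketch below illustrates its core ingredient with a single excitatory/inhibitory pair governed by a supralinear (power-law) transfer function: the steady-state gain first grows with input and then shrinks as recurrent inhibition dominates, which is the regime in which surround-like suppression emerges. This is a toy model with assumed parameters, not the network fitted in the study.

    ```python
    # Toy SSN sketch: one E/I pair with a supralinear transfer function (assumed values).
    import numpy as np
    from scipy.optimize import fsolve

    k, n = 0.01, 2.0                              # power-law gain and exponent (assumed)
    w_ee, w_ei, w_ie, w_ii = 1.0, 1.0, 1.0, 0.5   # recurrent weights (assumed)

    def fixed_point(h, guess):
        """Steady state of r = k * max(W r + h, 0)**n for one excitatory/inhibitory pair."""
        def residual(r):
            r_e, r_i = r
            z_e = w_ee * r_e - w_ei * r_i + h
            z_i = w_ie * r_e - w_ii * r_i + h
            return [k * max(z_e, 0.0) ** n - r_e,
                    k * max(z_i, 0.0) ** n - r_i]
        return fsolve(residual, guess)

    rates = np.array([0.0, 0.0])
    for h in [1, 2, 5, 10, 20, 50, 100, 200]:
        rates = fixed_point(h, rates)             # continuation: reuse the previous solution
        print(f"input {h:>3}:  E rate {rates[0]:7.2f}   gain r_E/h = {rates[0] / h:.3f}")
    # The gain first grows with drive (supralinear summation of weak inputs) and then
    # falls as recurrent inhibition takes over -- the regime in which the SSN produces
    # suppression for strong (high-contrast) stimuli.
    ```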

  13. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    PubMed

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can visually recognize material categories of objects, such as glass, stone, and plastic, with ease. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Optic nerve dysfunction during gravity inversion. Visual field abnormalities.

    PubMed

    Sanborn, G E; Friberg, T R; Allen, R

    1987-06-01

    Inversion in a head-down position (gravity inversion) results in an intraocular pressure of 35 to 40 mm Hg in normal subjects. We used computerized static perimetry to measure the visual fields of normal subjects during gravity inversion. There were no changes in the central 6 degrees of the visual field compared with the baseline (preinversion) values. However, when the central 30 degrees of the visual field was tested, reversible visual field defects were found in 11 of 19 eyes. We believe that the substantial elevation of intraocular pressure during gravity inversion may pose risks to the eyes, and we recommend that inversion for extended periods of time be avoided.

  15. Effects of Temporal Features and Order on the Apparent Duration of a Visual Stimulus

    PubMed Central

    Bruno, Aurelio; Ayhan, Inci; Johnston, Alan

    2012-01-01

    The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable-duration static and drifting intervals). In this last condition, the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second, with a magnitude that depended on standard duration. These results are at odds with a model of time perception in which the perceived passage of time simply reflects the number of temporal features within an interval. PMID:22461778

  16. Comparing different stimulus configurations for population receptive field mapping in human fMRI

    PubMed Central

    Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel

    2015-01-01

    Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620
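
    As a rough sketch of the model-based forward prediction that pRF mapping relies on (not this study's implementation), the code below computes the overlap of a binary stimulus-aperture movie with a 2D Gaussian pRF and convolves it with a canonical-style haemodynamic response. Grid sizes, HRF shape, and the sweeping-bar stimulus are all assumed for illustration.

    ```python
    # Hedged sketch of a Gaussian pRF forward model with assumed parameters.
    import math
    import numpy as np

    def double_gamma_hrf(tr, duration=32.0):
        """Canonical-style two-gamma HRF sampled at the TR (shape values assumed)."""
        t = np.arange(0.0, duration, tr)
        hrf = (t ** 5 * np.exp(-t) / math.factorial(5)
               - 0.1 * t ** 15 * np.exp(-t) / math.factorial(15))
        return hrf / hrf.max()

    def prf_prediction(aperture, xs, ys, x0, y0, sigma, tr):
        """Predicted BOLD time course of one Gaussian pRF.

        aperture: (n_timepoints, n_y, n_x) binary stimulus movie in visual degrees.
        """
        gx, gy = np.meshgrid(xs, ys)
        prf = np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * sigma ** 2))
        neural = aperture.reshape(aperture.shape[0], -1) @ prf.ravel()    # overlap per frame
        return np.convolve(neural, double_gamma_hrf(tr))[: neural.size]  # haemodynamic blur

    # Toy example: a vertical bar sweeping left to right across a 21 x 21 degree grid
    xs = ys = np.linspace(-10, 10, 21)
    aperture = np.zeros((21, 21, 21))
    for frame in range(21):
        aperture[frame, :, frame] = 1.0
    pred = prf_prediction(aperture, xs, ys, x0=2.0, y0=0.0, sigma=1.5, tr=2.0)
    print("predicted response peaks at frame", int(np.argmax(pred)))
    ```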

  17. [Microcomputer control of an LED stimulus display device].

    PubMed

    Ohmoto, S; Kikuchi, T; Kumada, T

    1987-02-01

    A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of an LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse") and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse, to construct letter patterns (alphabet, digits, symbols, and Japanese letters--kanji, hiragana, katakana), to modify the patterns, to store the patterns on a floppy disc, and to translate the patterns into integer data, which are used to display the patterns in the second software package. The second software package, written in BASIC and machine language, controls the display of a sequence of stimulus patterns according to predetermined time schedules in visual experiments.
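
    Purely as a hypothetical illustration of the translation step described above (written in Python rather than the original BASIC), a stimulus pattern drawn as rows of on/off LEDs can be packed into one integer per row before being handed to the display routine:

    ```python
    # Hypothetical sketch: pack a binary LED pattern into integer data, one value per row.
    def pattern_to_integers(pattern):
        """Translate rows of '0'/'1' characters (off/on LEDs) into one integer per row."""
        return [int(row, 2) for row in pattern]

    letter_t = ["11111",
                "00100",
                "00100",
                "00100",
                "00100"]
    print(pattern_to_integers(letter_t))       # -> [31, 4, 4, 4, 4]
    ```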

  18. Beyond a mask and against the bottleneck: retroactive dual-task interference during working memory consolidation of a masked visual target.

    PubMed

    Nieuwenstein, Mark; Wyble, Brad

    2014-06-01

    While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  19. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials

    PubMed Central

    Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana

    2016-01-01

    This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless of the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the individual visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750

  20. Response-specifying cue for action interferes with perception of feature-sharing stimuli.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-06-01

    Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, which is known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were determined separately. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention or response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.

  1. Neural Pathways Conveying Nonvisual Information to the Visual Cortex

    PubMed Central

    2013-01-01

    The visual cortex has traditionally been considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of the functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of the non-visual signals in the visual cortex, we first showed the supramodal nature of the visual cortex. We then reviewed how the non-visual signals reach the visual cortex. Moreover, we discussed whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question about the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972

  2. A neural correlate of working memory in the monkey primary visual cortex.

    PubMed

    Supèr, H; Spekreijse, H; Lamme, V A

    2001-07-06

    The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporarily once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.

  3. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    PubMed

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  4. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object

    PubMed Central

    Persuh, Marjan; Melara, Robert D.

    2016-01-01

    In two experiments, we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision. PMID:27047362

  5. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  6. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action, the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  7. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action, the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  8. Chromatic spatial contrast sensitivity estimated by visual evoked cortical potential and psychophysics

    PubMed Central

    Barboni, M.T.S.; Gomes, B.D.; Souza, G.S.; Rodrigues, A.R.; Ventura, D.F.; Silveira, L.C.L.

    2013-01-01

    The purpose of the present study was to measure contrast sensitivity to equiluminant gratings using steady-state visual evoked cortical potential (ssVECP) and psychophysics. Six healthy volunteers were evaluated with ssVECPs and psychophysics. The visual stimuli were red-green or blue-yellow horizontal sinusoidal gratings, 5° × 5°, 34.3 cd/m2 mean luminance, presented at 6 Hz. Eight spatial frequencies from 0.2 to 8 cpd were used, each presented at 8 contrast levels. Contrast threshold was obtained by extrapolating second harmonic amplitude values to zero. Psychophysical contrast thresholds were measured using stimuli at 6 Hz and static presentation. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. ssVECP and both psychophysical contrast sensitivity functions (CSFs) were low-pass functions for red-green gratings. For electrophysiology, the highest contrast sensitivity values were found at 0.4 cpd (1.95 ± 0.15). ssVECP CSF was similar to dynamic psychophysical CSF, while static CSF had higher values ranging from 0.4 to 6 cpd (P < 0.05, ANOVA). Blue-yellow chromatic functions showed no specific tuning shape; however, at high spatial frequencies the evoked potentials showed higher contrast sensitivity than the psychophysical methods (P < 0.05, ANOVA). Evoked potentials can be used reliably to evaluate chromatic red-green CSFs in agreement with psychophysical thresholds, particularly when the same temporal properties are applied to the stimulus. For blue-yellow CSF, correlation between electrophysiology and psychophysics was poor at high spatial frequency, possibly due to a greater effect of chromatic aberration on this kind of stimulus. PMID:23369980
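
    The extrapolation procedure mentioned above can be sketched as follows, assuming the classic approach of regressing second-harmonic amplitude on log contrast and reading off the contrast at which the fitted line reaches zero amplitude; the amplitude values are invented for illustration, and the study's exact fitting details may differ.

    ```python
    # Hedged sketch: estimate contrast threshold by extrapolating 2nd-harmonic amplitude.
    import numpy as np

    contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32])      # Michelson contrast levels
    amplitude = np.array([0.3, 0.9, 1.6, 2.4, 3.1])          # 2nd-harmonic amplitude (uV)

    slope, intercept = np.polyfit(np.log10(contrast), amplitude, 1)
    log_threshold = -intercept / slope                        # log contrast at zero amplitude
    threshold = 10 ** log_threshold
    print(f"estimated contrast threshold: {threshold:.3f}")
    print(f"contrast sensitivity: {1 / threshold:.1f}")
    ```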

  9. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    PubMed

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
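
    For orientation, the Davis model referred to above relates fractional BOLD and CBF changes to the fractional CMRO2 change; a minimal sketch with assumed calibration parameters (M, alpha, beta) is shown below. The numbers are illustrative, not values reported in the study.

    ```python
    # Minimal sketch of the standard Davis calibrated-BOLD relation (assumed parameters).
    def cmro2_change(d_bold, d_cbf, M=0.08, alpha=0.38, beta=1.5):
        """Fractional CMRO2 change from fractional BOLD and CBF changes.

        Davis model: dBOLD/BOLD0 = M * (1 - f**(alpha - beta) * r**beta),
        where f = CBF/CBF0 and r = CMRO2/CMRO2_0.
        """
        f = 1.0 + d_cbf
        r = ((1.0 - d_bold / M) / f ** (alpha - beta)) ** (1.0 / beta)
        return r - 1.0

    # Hypothetical numbers: a 1.5% BOLD change accompanying a 60% CBF change
    d_cmro2 = cmro2_change(d_bold=0.015, d_cbf=0.60)
    n = 0.60 / d_cmro2                        # flow-metabolism coupling ratio %dCBF/%dCMRO2
    print(f"estimated CMRO2 change: {d_cmro2 * 100:.1f}%, coupling ratio n = {n:.1f}")
    ```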

  10. [Analysis of electrically evoked response (EER) in relation to the central visual pathway of the cat (1). Wave shape of the cat EER].

    PubMed

    Fukatsu, Y; Miyake, Y; Sugita, S; Saito, A; Watanabe, S

    1990-11-01

    To analyze the electrically evoked response (EER) in relation to the central visual pathway, the authors studied the properties of wave patterns and peak latencies of the EER in 35 anesthetized adult cats. The cat EER showed two early positive waves on outward current (cornea cathode) stimulation and three or four early positive waves on inward current (cornea anode) stimulation. These waves were recorded within 50 ms after stimulus onset, and were the most consistent components in the cat EER. The stimulus threshold for the EER showed less individual variation than the amplitude. The difference in stimulus threshold between outward and inward current stimulation was also essentially negligible. The stimulus threshold was higher in early components than in late components. The peak latency of the EER became shorter and the amplitude became higher as the stimulus intensity was increased. However, this tendency was reversed and some wavelets started to appear when the stimulus was extremely strong. Recording with a short stimulus duration and bipolar electrodes enabled us to reduce the electrical artifact of the EER. These results obtained from cats were compared with those of humans and rabbits.

  11. Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.

    PubMed

    Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas

    2017-01-01

    The effect of acute ethanol administration on the flash visual-evoked potential (VEP) has been investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of a 2 g/kg ethanol dose. Three VEP components - N63, P89 and N143 - were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during the OFF response was not affected by ethanol but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.

  12. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  13. Contrast sensitivity test and conventional and high frequency audiometry: information beyond that required to prescribe lenses and headsets

    NASA Astrophysics Data System (ADS)

    Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.

    2008-04-01

    In Optometry and in Audiology, the routine tests to prescribe correction lenses and headsets are respectively the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are psychophysical non-invasive tests that, besides evaluating visual and auditory performance globally and even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. Concerning Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). Concerning Audiology, one of these tests is high frequency pure tone audiometry (introduced a few decades ago) which yields information relative to pathologies affecting the basal cochlea and complements data resulting from conventional audiometry. These utilities of the contrast sensitivity test and of pure tone audiometry derive from the facts that Fourier components constitute the basis for synthesizing stimuli present at the entrance of the visual and auditory systems; that these systems' responses depend on frequency; and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (inverse of the angle subtended at the eye by a cycle of a sinusoidal grating and measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests have similar durations and consist in determining the patient's threshold (corresponding to the multiplicative inverse of the contrast or to the additive inverse of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure tone sound). In this article the frequencies, standard normality curves and abnormal threshold shifts inherent to the contrast sensitivity test (which for simplicity could be termed "visionmetry") and to pure tone audiometry (also termed auditory sensitivity test) are analyzed with the aim of publicizing their ability to supply early information associated with pathologies not solely related to the visual and auditory systems, respectively.
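
    The two quantities emphasized above, spatial frequency as the reciprocal of the visual angle subtended by one grating cycle and contrast sensitivity as the reciprocal of the threshold contrast, can be made concrete with a small worked example (all numbers assumed):

    ```python
    # Worked example: cycles/degree from cycle width and viewing distance, and sensitivity.
    import math

    def cycles_per_degree(cycle_width_cm, viewing_distance_cm):
        """Spatial frequency of a grating whose single cycle spans cycle_width_cm."""
        degrees_per_cycle = math.degrees(2 * math.atan(cycle_width_cm / (2 * viewing_distance_cm)))
        return 1.0 / degrees_per_cycle

    frequency = cycles_per_degree(cycle_width_cm=0.5, viewing_distance_cm=100.0)
    threshold_contrast = 0.01                  # assumed threshold: 1% Michelson contrast
    print(f"spatial frequency: {frequency:.1f} cycles/degree")
    print(f"contrast sensitivity: {1 / threshold_contrast:.0f}")
    ```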

  14. On the role of covarying functions in stimulus class formation and transfer of function.

    PubMed Central

    Markham, Rebecca G; Markham, Michael R

    2002-01-01

    This experiment investigated whether directly trained covarying functions are necessary for stimulus class formation and transfer of function in humans. Initial class training was designed to establish two respondent-based stimulus classes by pairing two visual stimuli with shock and two other visual stimuli with no shock. Next, two operant discrimination functions were trained to one stimulus of each putative class. The no-shock group received the same training and testing in all phases, except no stimuli were ever paired with shock. The data indicated that skin conductance response conditioning did not occur for the shock groups or for the no-shock group. Tests showed transfer of the established discriminative functions, however, only for the shock groups, indicating the formation of two stimulus classes only for those participants who received respondent class training. The results suggest that transfer of function does not depend on first covarying the stimulus class functions. PMID:12507017

  15. Visual spatial attention enhances the amplitude of positive and negative fMRI responses to visual stimulation in an eccentricity-dependent manner.

    PubMed

    Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A

    2013-06-07

    Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

    Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring on the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand, on either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally to a single hand or bilaterally to both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred on the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but, importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
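
    Although the study used the adaptive PSI procedure, the quantity of interest, a shift of the point of subjective simultaneity (PSS) toward the stimulated side, can be illustrated with a simpler cumulative-Gaussian fit to temporal-order-judgment data; the sketch below uses synthetic data and is not the authors' implementation.

    ```python
    # Hedged sketch: estimate the PSS from temporal-order-judgment proportions.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # SOA > 0: the visual stimulus beside the nociceptively stimulated hand came first
    soa_ms = np.array([-90, -60, -30, 0, 30, 60, 90])
    p_first = np.array([0.08, 0.15, 0.35, 0.62, 0.81, 0.92, 0.97])  # synthetic proportions

    def psychometric(soa, pss, sigma):
        """Cumulative Gaussian: probability of reporting the stimulated side first."""
        return norm.cdf(soa, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa_ms, p_first,
                                p0=(0.0, 30.0), bounds=([-200, 1], [200, 300]))
    print(f"PSS = {pss:.1f} ms, slope parameter sigma = {sigma:.1f} ms")
    # A negative PSS means the visual stimulus near the stimulated hand needs less of a
    # head start to be judged first, i.e. it is perceptually prioritized.
    ```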

  17. Perceived duration decreases with increasing eccentricity.

    PubMed

    Kliegl, Katrin M; Huckauf, Anke

    2014-07-01

    Previous studies examining the influence of stimulus location on temporal perception yield inhomogeneous and contradicting results. Therefore, the aim of the present study is to soundly examine the effect of stimulus eccentricity. In a series of five experiments, subjects compared the duration of foveal disks to disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by different cortical representation sizes. The apparent decreasing duration of stimuli with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention and respective underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury

    NASA Astrophysics Data System (ADS)

    Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.

    2008-02-01

    Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects on spatial contrast sensitivity and on a stimulus detection and aiming task were evaluated. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunction visual search display varying in size (visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as R.T. measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search condition than in the simple-background condition. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background than in the complex-background search condition. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.

  20. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Electrophysiological evidence for phenomenal consciousness.

    PubMed

    Revonsuo, Antti; Koivisto, Mika

    2010-09-01

    Recent evidence from event-related brain potentials (ERPs) lends support to two central theses in Lamme's theory. The earliest ERP correlate of visual consciousness appears over posterior visual cortex around 100-200 ms after stimulus onset. Its scalp topography and time window are consistent with recurrent processing in the visual cortex. This electrophysiological correlate of visual consciousness is mostly independent of later ERPs reflecting selective attention and working memory functions. Overall, the ERP evidence supports the view that phenomenal consciousness of a visual stimulus emerges earlier than access consciousness, and that attention and awareness are served by distinct neural processes.

  2. Visual encoding impairment in patients with schizophrenia: contribution of reduced working memory span, decreased processing speed, and affective symptoms.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Huerta-Ramos, Elena; Ochoa, Susana; Usall, Judith; Abellán-Vega, Helena; Roca, Mercedes; Haro, Josep Maria

    2015-01-01

    Previous research has revealed the contribution of decreased processing speed and reduced working memory span in verbal and visual memory impairment in patients with schizophrenia. The role of affective symptoms in verbal memory has also emerged in a few studies. The authors designed a picture recognition task to investigate the impact of these factors on visual encoding. Two types of pictures (black and white vs. colored) were presented under 2 different conditions of context encoding (either displayed at a specific location or in association with another visual stimulus). It was assumed that the process of encoding associated pictures was more effortful than that of encoding pictures that were presented alone. Working memory span and processing speed were assessed. In the patient group, working memory span was significantly associated with the recognition of the associated pictures but not significantly with that of the other pictures. Controlling for processing speed eliminated the patients' deficit in the recognition of the colored pictures and greatly reduced their deficit in the recognition of the black-and-white pictures. The recognition of the black-and-white pictures was inversely related to anxiety in men and to depression in women. Working memory span constrains the effortful visual encoding processes in patients, whereas processing speed decrement accounts for most of their visual encoding deficit. Affective symptoms also have an impact on visual encoding, albeit differently in men and women. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  3. Social cichlid fish change behaviour in response to a visual predator stimulus, but not the odour of damaged conspecifics.

    PubMed

    O'Connor, Constance M; Reddon, Adam R; Odetunde, Aderinsola; Jindal, Shagun; Balshine, Sigal

    2015-12-01

    Predation is one of the primary drivers of fitness for prey species. Therefore, there should be strong selection for accurate assessment of predation risk, and whenever possible, individuals should use all available information to fine-tune their response to the current threat of predation. Here, we used a controlled laboratory experiment to assess the responses of individual Neolamprologus pulcher, a social cichlid fish, to a live predator stimulus, to the odour of damaged conspecifics, or to both indicators of predation risk combined. We found that fish in the presence of the visual predator stimulus showed typical antipredator behaviour. Namely, these fish decreased activity and exploration, spent more time seeking shelter, and more time near conspecifics. Surprisingly, there was no effect of the chemical cue alone, and fish showed a reduced response to the combination of the visual predator stimulus and the odour of damaged conspecifics relative to the visual predator stimulus alone. These results demonstrate that N. pulcher adjust their anti-predator behaviour to the information available about current predation risk, and we suggest a possible role for the use of social information in the assessment of predation risk in a cooperatively breeding fish. Copyright © 2015. Published by Elsevier B.V.

  4. The path to memory is guided by strategy: distinct networks are engaged in associative encoding under visual and verbal strategy and influence memory performance in healthy and impaired individuals

    PubMed Central

    Hales, J. B.; Brewer, J. B.

    2018-01-01

    Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually-associated pairs and inferior frontal, medial frontal, and medial occipital for verbally-associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467

  5. Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses

    PubMed Central

    2016-01-01

    Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
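
    Intersubject correlation, the measure referred to above, can be sketched in a few lines by correlating each subject's response with the average of the remaining subjects; the actual study used a related correlated-components approach, so the code below is only a hedged illustration on synthetic data.

    ```python
    # Hedged sketch: leave-one-out intersubject correlation (ISC) on synthetic signals.
    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_samples = 10, 2000
    shared = rng.standard_normal(n_samples)                             # stimulus-locked component
    data = 0.5 * shared + rng.standard_normal((n_subjects, n_samples))  # per-subject recordings

    isc = []
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        isc.append(np.corrcoef(data[s], others)[0, 1])
    print("mean ISC:", round(float(np.mean(isc)), 3))
    ```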

  6. Does dorsolateral prefrontal cortex (DLPFC) activation return to baseline when sexual stimuli cease? The role of DLPFC in visual sexual stimulation.

    PubMed

    Leon-Carrion, Jose; Martín-Rodríguez, Juan Francisco; Damas-López, Jesús; Pourrezai, Kambiz; Izzetoglu, Kurtulus; Barroso Y Martin, Juan Manuel; Dominguez-Morales, M Rosario

    2007-04-06

    A fundamental question in human sexuality concerns the neural substrate underlying sexually arousing representations. Lesion and neuroimaging studies suggest that dorsolateral prefrontal cortex (DLPFC) plays an important role in regulating the processing of visual sexual stimulation. The aim of this Functional Near-Infrared Spectroscopy (fNIRS) study was to explore DLPFC structures involved in the processing of erotic and non-sexual films. fNIRS was used to image the evoked cerebral blood oxygenation (CBO) response in 15 male and 15 female subjects. Our hypothesis was that a sexual stimulus would produce DLPFC activation during the period of direct stimulus perception ("on" period), and that this activation would continue after stimulus cessation ("off" period). A new paradigm was used to measure the relative oxygenated hemoglobin (oxyHb) concentrations in DLPFC while subjects viewed the two selected stimuli (a Roman orgy scene and a non-sexual film clip), and also immediately following stimulus cessation. Viewing of the non-sexual stimulus produced no overshoot in DLPFC, whereas exposure to the erotic stimulus produced a rapidly ascending overshoot, which became even more pronounced following stimulus cessation. We also report on gender differences in the timing and intensity of DLPFC activation in response to a sexually explicit visual stimulus. We found evidence indicating that men experience greater and more rapid sexual arousal when exposed to erotic stimuli than do women. Our results indicate that self-regulation of DLPFC activation is modulated by subjective arousal and that cognitive appraisal of the sexual stimulus (valence) plays a secondary role in this regulation.

  7. Multiple serial picture presentation with millisecond resolution using a three-way LC-shutter-tachistoscope

    PubMed Central

    Fischmeister, Florian Ph.S.; Leodolter, Ulrich; Windischberger, Christian; Kasess, Christian H.; Schöpf, Veronika; Moser, Ewald; Bauer, Herbert

    2010-01-01

    In recent years there has been increasing interest in studying unconscious visual processes. Such conditions of unawareness are typically achieved either by a sufficient reduction of the stimulus presentation time or by visual masking. However, there are growing concerns about the reliability of the presentation devices used. Because all these devices show great variability in presentation parameters, the processing of visual stimuli becomes dependent on the display device; for example, minimal changes in the physical stimulus properties may have an enormous impact on stimulus processing by the sensory system and on the actual experience of the stimulus. Here we present a custom-built three-way LC-shutter-tachistoscope which allows experimental setups with both precise and reliable stimulus delivery and millisecond resolution. This tachistoscope consists of three LCD projectors equipped with zoom lenses to enable stimulus presentation via a built-in mirror system onto a back-projection screen from an adjacent room. Two high-speed liquid crystal shutters are mounted serially in front of each projector to control the stimulus duration. To verify the intended properties empirically, different sequences of presentation times were run while changes in optical power were measured using a photoreceiver. The results demonstrate that interfering variabilities in stimulus parameters and stimulus rendering are markedly reduced. Together with the ability to record external signals and to send trigger signals to other devices, this tachistoscope represents a highly flexible and easy-to-set-up research tool, not only for the study of unconscious processing in the brain but for vision research in general. PMID:20122963

  8. Is Conscious Stimulus Identification Dependent on Knowledge of the Perceptual Modality? Testing the “Source Misidentification Hypothesis”

    PubMed Central

    Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677
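
    The statistical strategy described (rejecting a null hypothesis of non-equivalence so that rejection licenses a conclusion of equivalence) is commonly implemented as two one-sided tests (TOST). The sketch below shows one possible TOST against chance-level accuracy; the equivalence margin, the t-test on binary outcomes, and the simulated data are assumptions, not the authors' exact procedure.

        # Illustrative two-one-sided-tests (TOST) check of whether single-trial accuracy is
        # equivalent to chance (0.5) within a margin `delta`. Margin and data are assumed.
        import numpy as np
        from scipy import stats

        def tost_against_chance(correct, chance=0.5, delta=0.05, alpha=0.05):
            """`correct` is a 0/1 array of trial outcomes. Equivalence is concluded only if
            BOTH one-sided tests reject: accuracy > chance - delta AND accuracy < chance + delta."""
            correct = np.asarray(correct, dtype=float)
            _, p_low = stats.ttest_1samp(correct, chance - delta, alternative='greater')
            _, p_high = stats.ttest_1samp(correct, chance + delta, alternative='less')
            return max(p_low, p_high) < alpha, (p_low, p_high)

        rng = np.random.default_rng(1)
        trials = rng.binomial(1, 0.51, size=400)   # near-chance performance
        print(tost_against_chance(trials))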

  9. Putative inhibitory training of a stimulus makes it a facilitator: a within-subject comparison of visual and auditory stimuli in autoshaping.

    PubMed

    Nakajima, S

    2000-03-14

    Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
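
    Because the abstract frames the findings against the Rescorla-Wagner elemental model, a minimal simulation of that model for the A+, AB-, ABC+, AD-, ADE+ design may help; the learning-rate values and trial ordering below are illustrative assumptions.

        # Minimal Rescorla-Wagner (elemental) simulation of the A+, AB-, ABC+, AD-, ADE+ design.
        # Each element carries its own associative strength V; the prediction for a compound is
        # the sum of its elements' V, and every element present is updated by
        #   dV = alpha * beta * (lambda - V_compound).
        # The alpha/beta values and random trial order are illustrative assumptions.
        import random

        alpha = {s: 0.2 for s in "ABCDE"}     # element salience
        beta = 0.5
        V = {s: 0.0 for s in "ABCDE"}

        trial_types = [("A", 1.0), ("AB", 0.0), ("ABC", 1.0), ("AD", 0.0), ("ADE", 1.0)]
        random.seed(0)
        for _ in range(400):
            compound, lam = random.choice(trial_types)
            v_total = sum(V[s] for s in compound)
            for s in compound:
                V[s] += alpha[s] * beta * (lam - v_total)

        print({s: round(v, 2) for s, v in V.items()})
        # The elemental account predicts that B and D end up with negative (inhibitory)
        # strengths, which is what the test phase of this study puts under scrutiny.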

  10. Shades of yellow: interactive effects of visual and odour cues in a pest beetle

    PubMed Central

    Stevenson, Philip C.; Belmain, Steven R.

    2016-01-01

    Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707

  11. Occipital Alpha Activity during Stimulus Processing Gates the Information Flow to Object-Selective Cortex

    PubMed Central

    Zumer, Johanna M.; Scheeringa, René; Schoffelen, Jan-Mathijs; Norris, David G.; Jensen, Ole

    2014-01-01

    Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity. PMID:25333286

  12. Cortical and Subcortical Coordination of Visual Spatial Attention Revealed by Simultaneous EEG-fMRI Recording.

    PubMed

    Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G

    2017-08-16

    Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors 0270-6474/17/377803-08$15.00/0.

  13. Top-Down Beta Enhances Bottom-Up Gamma

    PubMed Central

    Thompson, William H.

    2017-01-01

    Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis by determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma-band influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
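
    The reported ~0.1 s lead of beta over gamma can be illustrated with a simple lagged-correlation sketch between a top-down and a bottom-up influence time course; the synthetic signals, sampling rate, and lag range are assumptions and stand in for the study's actual Granger-causal analysis.

        # Illustrative lagged correlation between a "top-down" beta-band influence time course
        # and a "bottom-up" gamma-band influence time course, to find the lag of peak coupling.
        # The signals and sampling rate below are synthetic assumptions.
        import numpy as np

        fs = 100.0                                   # samples per second (assumed)
        rng = np.random.default_rng(2)
        beta_td = rng.standard_normal(2000)
        gamma_bu = np.roll(beta_td, 10) + rng.standard_normal(2000)   # gamma lags beta by 0.1 s

        max_lag = 50
        lags = np.arange(-max_lag, max_lag + 1)
        corrs = []
        for lag in lags:
            if lag >= 0:                              # positive lag: beta leads gamma
                x, y = beta_td[:len(beta_td) - lag], gamma_bu[lag:]
            else:
                x, y = beta_td[-lag:], gamma_bu[:len(gamma_bu) + lag]
            corrs.append(np.corrcoef(x, y)[0, 1])

        best = lags[int(np.argmax(corrs))]
        print(f"peak correlation at lag {best / fs:+.2f} s (positive = beta leads gamma)")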

  14. Preattentive binding of auditory and visual stimulus features.

    PubMed

    Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo

    2005-02-01

    We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli), and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

  15. Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images

    PubMed Central

    Funahashi, Shintaro

    2016-01-01

    Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to identify the features that determine the strength of the preference for visual stimuli, in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task in which two stimuli, selected randomly from among the 50 stimuli, were simultaneously presented on a monitor and the monkeys were required to choose one of them by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at that stimulus for an additional 6 s, and we calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
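
    A choice ratio of the kind described can be computed directly from pairwise choice trials and then related to a physical image feature; the data structures and the complexity scores in the sketch below are illustrative assumptions.

        # Illustrative computation of per-stimulus choice ratios from two-alternative choice
        # trials, plus a rank correlation with an assumed image-complexity score per stimulus.
        import numpy as np
        from collections import defaultdict
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)
        n_stimuli = 50
        complexity = rng.random(n_stimuli)                  # assumed physical feature per image

        presented = defaultdict(int)
        chosen = defaultdict(int)
        for _ in range(5000):                               # simulated trials
            a, b = rng.choice(n_stimuli, size=2, replace=False)
            pick = a if complexity[a] + 0.3 * rng.standard_normal() > complexity[b] else b
            for s in (a, b):
                presented[s] += 1
            chosen[pick] += 1

        choice_ratio = np.array([chosen[s] / presented[s] for s in range(n_stimuli)])
        rho, p = spearmanr(choice_ratio, complexity)
        print(f"Spearman rho between choice ratio and complexity: {rho:.2f} (p={p:.3g})")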

  16. Establishing Auditory-Tactile-Visual Equivalence Classes in Children with Autism and Developmental Delays

    ERIC Educational Resources Information Center

    Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb

    2017-01-01

    The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…

  17. Components of Attention Modulated by Temporal Expectation

    ERIC Educational Resources Information Center

    Sørensen, Thomas Alrik; Vangkilde, Signe; Bundesen, Claus

    2015-01-01

    By varying the probabilities that a stimulus would appear at particular times after the presentation of a cue and modeling the data by the theory of visual attention (Bundesen, 1990), Vangkilde, Coull, and Bundesen (2012) provided evidence that the speed of encoding a singly presented stimulus letter into visual short-term memory (VSTM) is…

  18. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
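
    The summation prediction for independent features is often formalized as sensitivities combining in quadrature; the short sketch below states that benchmark under the assumption of equal-sensitivity, independent features.

        # Benchmark prediction for summation across independent features: if each feature alone
        # supports sensitivity d_i, an observer optimally combining independent features reaches
        # d_combined = sqrt(sum d_i**2). The single-feature sensitivities below are assumed values.
        import numpy as np

        d_single = np.array([1.0, 1.0])                 # e.g., spatial frequency and orientation alone
        d_combined = np.sqrt(np.sum(d_single ** 2))
        print(f"predicted combined d': {d_combined:.2f}")   # ~1.41, a sqrt(2) improvement

        # The point of the cited study is that an apparent improvement of this kind can also
        # arise simply because adding features adds stimulus information; equating stimulus
        # information across conditions removed the improvement.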

  19. Blur adaptation: contrast sensitivity changes and stimulus extent.

    PubMed

    Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda

    2015-05-01

    Prolonged exposure to foveal defocus is well known to affect visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in the contrast sensitivity function from baseline, following blur adaptation to small as well as laterally extended stimuli, in four subjects. The small-field stimulus (7.5° visual field) was a 30 min video of forest scenery projected on a screen, and the large-field stimulus consisted of seven tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00 D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation depends on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    PubMed Central

    Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta

    2016-01-01

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504

  1. Response properties of ON-OFF retinal ganglion cells to high-order stimulus statistics.

    PubMed

    Xiao, Lei; Gong, Han-Yan; Gong, Hai-Qing; Liang, Pei-Ji; Zhang, Pu-Ming

    2014-10-17

    The visual stimulus statistics are the fundamental parameters to provide the reference for studying visual coding rules. In this study, the multi-electrode extracellular recording experiments were designed and implemented on bullfrog retinal ganglion cells to explore the neural response properties to the changes in stimulus statistics. The changes in low-order stimulus statistics, such as intensity and contrast, were clearly reflected in the neuronal firing rate. However, it was difficult to distinguish the changes in high-order statistics, such as skewness and kurtosis, only based on the neuronal firing rate. The neuronal temporal filtering and sensitivity characteristics were further analyzed. We observed that the peak-to-peak amplitude of the temporal filter and the neuronal sensitivity, which were obtained from either neuronal ON spikes or OFF spikes, could exhibit significant changes when the high-order stimulus statistics were changed. These results indicate that in the retina, the neuronal response properties may be reliable and powerful in carrying some complex and subtle visual information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Distinct roles of visual, parietal, and frontal motor cortices in memory-guided sensorimotor decisions

    PubMed Central

    Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka

    2016-01-01

    Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481

  3. A versatile stereoscopic visual display system for vestibular and oculomotor research.

    PubMed

    Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S

    1998-01-01

    Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low-cost, stereoscopic visual display system using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it can also produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. The system's inherent versatility makes it useful in several different types of experiments, and because it is software-driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.

  4. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    PubMed

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses-in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
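
    A model combining feature-based attentional gain with response normalization, in the general spirit described here, can be sketched in a few lines; the gains, channels, and normalization constant below are illustrative assumptions, not the authors' fitted model.

        # Minimal sketch of response normalization combined with feature-based attentional gain:
        #   R_i = (A_i * E_i) / (sigma + sum_j A_j * E_j)
        # where E is the stimulus drive of each orientation channel and A is an attentional gain
        # that is larger for the attended orientation. All numbers are illustrative assumptions.
        import numpy as np

        orientations = np.array([0.0, 90.0])          # iso- vs orthogonally oriented channels
        E = np.array([1.0, 1.0])                      # stimulus drive to each channel
        sigma = 0.5                                   # normalization constant

        def responses(attended_orientation, attn_gain=2.0):
            A = np.where(orientations == attended_orientation, attn_gain, 1.0)
            drive = A * E
            return drive / (sigma + drive.sum())

        print("attend 0 deg :", responses(0.0).round(3))
        print("attend 90 deg:", responses(90.0).round(3))
        # Shifting attention between channels changes whether the same contextual drive acts
        # mostly as suppression (via the normalization pool) or as enhancement of a channel.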

  5. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex

    PubMed Central

    Singer, Wolf; Maass, Wolfgang

    2009-01-01

    It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or longer. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
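
    The linear read-out referred to in the abstract can be illustrated by decoding stimulus identity from ensemble spike counts with a cross-validated linear classifier; the simulated data and the scikit-learn calls below are assumptions about how such an analysis might look, not the authors' code.

        # Illustrative linear decoding of stimulus identity (3 letters) from ensemble spike
        # counts of ~100 neurons, with cross-validation. Data are simulated; names are assumed.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_trials, n_neurons, n_stimuli = 300, 100, 3
        labels = rng.integers(0, n_stimuli, size=n_trials)
        tuning = rng.standard_normal((n_stimuli, n_neurons))            # each stimulus drives a pattern
        spike_counts = rng.poisson(np.exp(1.0 + 0.5 * tuning[labels]))  # trial x neuron counts

        clf = LogisticRegression(max_iter=1000)                         # a simple linear read-out
        scores = cross_val_score(clf, spike_counts, labels, cv=5)
        print(f"cross-validated decoding accuracy: {scores.mean():.2f}")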

  6. Short-term memory for event duration: modality specificity and goal dependency.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2012-11-01

    Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. Contrarily, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, whenever the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance is determined depending on the sensory modality in which the retained duration will be used subsequently.

  7. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  8. An investigation of the spatial selectivity of the duration after-effect.

    PubMed

    Maarseveen, Jim; Hogendoorn, Hinze; Verstraten, Frans A J; Paffen, Chris L E

    2017-01-01

    Adaptation to the duration of a visual stimulus causes the perceived duration of a subsequently presented stimulus with a slightly different duration to be skewed away from the adapted duration. This pattern of repulsion following adaptation is similar to that observed for other visual properties, such as orientation, and is considered evidence for the involvement of duration-selective mechanisms in duration encoding. Here, we investigated whether the encoding of duration - by duration-selective mechanisms - occurs early on in the visual processing hierarchy. To this end, we investigated the spatial specificity of the duration after-effect in two experiments. We measured the duration after-effect at adapter-test distances ranging between 0 and 15° of visual angle and for within- and between-hemifield presentations. We replicated the duration after-effect: the test stimulus was perceived to have a longer duration following adaptation to a shorter duration, and a shorter duration following adaptation to a longer duration. Importantly, this duration after-effect occurred at all measured distances, with no evidence for a decrease in the magnitude of the after-effect at larger distances or across hemifields. This shows that adaptation to duration does not result from adaptation occurring early on in the visual processing hierarchy. Instead, it seems likely that duration information is a high-level stimulus property that is encoded later on in the visual processing hierarchy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Simon Effect with and without Awareness of the Accessory Stimulus

    ERIC Educational Resources Information Center

    Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena

    2006-01-01

    The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…

  10. Sensitivity and integration in a visual pathway for circadian entrainment in the hamster (Mesocricetus auratus).

    PubMed Central

    Nelson, D E; Takahashi, J S

    1991-01-01

    1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10(11) photons cm-2 s-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
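
    The half-saturation constant sigma mentioned above is the parameter of a saturating stimulus-response function; the sketch below fits one common such form (a hyperbolic saturation) to synthetic phase-shift data. The functional form, units, and data are assumptions for illustration only.

        # Illustrative fit of a saturating stimulus-response function,
        #   R(I) = Rmax * I / (I + sigma),
        # to phase-shift responses as a function of stimulus irradiance I. The functional form
        # and the synthetic data are assumptions for demonstration only.
        import numpy as np
        from scipy.optimize import curve_fit

        def saturating(I, Rmax, sigma):
            return Rmax * I / (I + sigma)

        # Synthetic irradiances (photons cm^-2 s^-1) and phase shifts (minutes), with noise.
        I = np.logspace(10, 14, 9)
        rng = np.random.default_rng(5)
        phase_shift = saturating(I, Rmax=120.0, sigma=1e12) + rng.normal(0, 5, size=I.size)

        popt, pcov = curve_fit(saturating, I, phase_shift, p0=[100.0, 1e12])
        Rmax_hat, sigma_hat = popt
        print(f"Rmax ~ {Rmax_hat:.0f} min, half-saturation sigma ~ {sigma_hat:.2e} photons cm-2 s-1")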

  11. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  12. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the 'complex' visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Because the brain is the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
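
    Fractality of an eye-movement or EEG time series is often quantified with detrended fluctuation analysis (DFA); a minimal DFA estimator is sketched below. Whether this particular estimator matches the one used in the study is an assumption, as is the synthetic test signal.

        # Minimal detrended fluctuation analysis (DFA): one common way to quantify the fractal
        # (long-range correlation) structure of a 1-D signal such as an eye-position trace or EEG.
        # Whether the original study used DFA is not stated; this is an illustrative estimator.
        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())                       # integrated (profile) signal
            flucts = []
            for s in scales:
                n_windows = len(y) // s
                rms = []
                for w in range(n_windows):
                    seg = y[w * s:(w + 1) * s]
                    t = np.arange(s)
                    coef = np.polyfit(t, seg, 1)              # linear detrend within the window
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
                flucts.append(np.mean(rms))
            slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return slope                                      # ~0.5 white noise, >0.5 persistent

        rng = np.random.default_rng(6)
        print(f"DFA exponent of white noise: {dfa_exponent(rng.standard_normal(4096)):.2f}")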

  13. Sexual attraction to others: a comparison of two models of alloerotic responding in men.

    PubMed

    Blanchard, Ray; Kuban, Michael E; Blak, Thomas; Klassen, Philip E; Dickey, Robert; Cantor, James M

    2012-02-01

    The penile response profiles of homosexual and heterosexual pedophiles, hebephiles, and teleiophiles to laboratory stimuli depicting male and female children and adults may be conceptualized as a series of overlapping stimulus generalization gradients. This study used such profile data to compare two models of alloerotic responding (sexual responding to other people) in men. The first model was based on the notion that men respond to a potential sexual object as a compound stimulus made up of an age component and a gender component. The second model was based on the notion that men respond to a potential sexual object as a gestalt, which they evaluate in terms of global similarity to other potential sexual objects. The analytic strategy was to compare the accuracy of these models in predicting a man's penile response to each of his less arousing (nonpreferred) stimulus categories from his response to his most arousing (preferred) stimulus category. Both models based their predictions on the degree of dissimilarity between the preferred stimulus category and a given nonpreferred stimulus category, but each model used its own measure of dissimilarity. According to the first model ("summation model"), penile response should vary inversely as the sum of stimulus differences on separate dimensions of age and gender. According to the second model ("bipolar model"), penile response should vary inversely as the distance between stimulus categories on a single, bipolar dimension of morphological similarity: a dimension on which children are located near the middle, and adult men and women are located at opposite ends. The subjects were 2,278 male patients referred to a specialty clinic for phallometric assessment of their erotic preferences. Comparisons of goodness of fit to the observed data favored the unidimensional bipolar model.
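
    The two dissimilarity measures can be contrasted with a short sketch: the summation model adds differences along separate age and gender dimensions, whereas the bipolar model uses distance along a single morphological axis with children near the middle. The category coordinates, axis positions, and preferred category below are illustrative assumptions.

        # Illustrative comparison of the two dissimilarity measures described in the abstract.
        # Stimulus categories are given assumed coordinates; both models predict that response
        # to a nonpreferred category falls off with its dissimilarity from the preferred one.
        import numpy as np

        # Assumed coordinates: (age: 0=child, 1=adult; gender: 0=female, 1=male)
        two_dim = {"girl": (0, 0), "boy": (0, 1), "woman": (1, 0), "man": (1, 1)}
        # Assumed positions on a single bipolar morphological axis (women ... children ... men)
        bipolar = {"woman": -1.0, "girl": -0.2, "boy": 0.2, "man": 1.0}

        def summation_dissimilarity(a, b):
            (age_a, sex_a), (age_b, sex_b) = two_dim[a], two_dim[b]
            return abs(age_a - age_b) + abs(sex_a - sex_b)

        def bipolar_dissimilarity(a, b):
            return abs(bipolar[a] - bipolar[b])

        preferred = "woman"      # e.g., an adult-female-preferring profile (assumed)
        for target in ["woman", "girl", "boy", "man"]:
            print(f"{preferred} -> {target}: "
                  f"summation={summation_dissimilarity(preferred, target):.1f}, "
                  f"bipolar={bipolar_dissimilarity(preferred, target):.1f}")
        # The two models order the nonpreferred categories differently (e.g., the summation
        # model treats 'man' and 'girl' as equally dissimilar from 'woman'; the bipolar model
        # does not), which is what lets the observed response profiles favour one model.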

  14. Evaluation of an organic light-emitting diode display for precise visual stimulation.

    PubMed

    Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji

    2013-06-11

    A new type of visual display for high-quality visual stimulus presentation was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in luminance intensity were sharper on the OLED display than on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high-quality stimulus presentation. Benefits of using OLED displays in vision research are especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.

  15. Emotional facilitation of sensory processing in the visual cortex.

    PubMed

    Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2003-01-01

    A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.

  16. High-resolution eye tracking using V1 neuron activity

    PubMed Central

    McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.

    2014-01-01

    Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783

  17. Neuronal population coding of perceived and memorized visual features in the lateral prefrontal cortex

    PubMed Central

    Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.

    2017-01-01

    The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756

  18. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave

    PubMed Central

    Muller, Lyle; Reynaud, Alexandre; Chavane, Frédéric; Destexhe, Alain

    2014-01-01

    Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions. PMID:24770473

  19. Retinotopic patterns of background connectivity between V1 and fronto-parietal cortex are modulated by task demands

    PubMed Central

    Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.

    2015-01-01

    Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1, using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320

  20. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin ). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  1. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
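
    The model-comparison logic can be illustrated by predicting a response from numerosity versus from a co-varying non-numerical feature and comparing cross-validated fits; the linear-regression stand-in, the chosen feature, and the synthetic data below are assumptions and are not the pRF modeling procedure used in the study.

        # Illustrative comparison of how well numerosity vs. a non-numerical visual feature
        # (e.g., total contour length) predicts a response, using cross-validated R^2.
        # This linear-regression stand-in is an assumption; the study itself used pRF models.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        numerosity = rng.integers(1, 8, size=200).astype(float)
        contour_length = numerosity * rng.uniform(0.5, 1.5, size=200)     # co-varies with numerosity
        response = np.log(numerosity) + 0.1 * rng.standard_normal(200)    # assumed numerosity-driven signal

        def cv_r2(predictor):
            return cross_val_score(LinearRegression(), predictor.reshape(-1, 1), response,
                                   cv=5, scoring="r2").mean()

        print(f"R^2 from numerosity:     {cv_r2(np.log(numerosity)):.2f}")
        print(f"R^2 from contour length: {cv_r2(np.log(contour_length)):.2f}")
        # Because the feature only partially co-varies with numerosity, the numerosity model
        # predicts the (numerosity-driven) response better, which is the same logic used to
        # distinguish candidate response models in the study.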

  2. Does bimodal stimulus presentation increase ERP components usable in BCIs?

    NASA Astrophysics Data System (ADS)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.

    2012-08-01

    Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.

  3. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  4. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    PubMed

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
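
    To make the drift-diffusion distinction concrete, the simulation below (illustrative parameter values, not the authors' fitting code) shows why model fitting is needed: lowering the drift rate ("processing efficiency") and raising the boundary separation ("threshold setting") both slow mean response times, so the two accounts cannot be separated from mean RT alone.

```python
# Illustrative drift diffusion simulation (not the authors' fitting procedure).
# All parameter values below are assumptions chosen for the demonstration.
import numpy as np

def mean_rt(drift, boundary, n_trials=2000, dt=0.002, noise=1.0,
            non_decision=0.3, seed=0):
    """Mean RT for a diffusion starting midway between boundaries at +/- boundary/2."""
    rng = np.random.default_rng(seed)
    rts = []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary / 2:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
    return float(np.mean(rts))

print("baseline        :", round(mean_rt(drift=2.0, boundary=1.0), 3))
print("lower drift     :", round(mean_rt(drift=1.0, boundary=1.0), 3))
print("higher threshold:", round(mean_rt(drift=2.0, boundary=1.5), 3))
```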

  5. Contralateral cortical organisation of information in visual short-term memory: evidence from lateralized brain activity during retrieval.

    PubMed

    Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J; Jolicœur, Pierre

    2012-07-01

    We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe at the fixation point that designated the target stimulus in memory about which to make a determination of orientation. Retrieval of information from VSTM was associated with an event-related lateralization (ERL) with a contralateral negativity relative to the visual field from which the probed stimulus was originally encoded, suggesting a lateralized organization of VSTM. The scalp distribution of the retrieval ERL was more anterior than what is usually associated with simple maintenance activity, which is consistent with the involvement of different brain structures for these distinct visual memory mechanisms. Experiment 2 was like Experiment 1, but used an unbalanced memory array consisting of one lateral color stimulus in a hemifield and one color stimulus on the vertical mid-line. This design enabled us to separate lateralized activity related to target retrieval from distractor processing. Target retrieval was found to generate a negative-going ERL at electrode sites found in Experiment 1, suggesting that representations were retrieved from anterior cortical structures. Distractor processing elicited a positive-going ERL at posterior electrode sites, which could be indicative of a return to baseline of retention activity for the discarded memory of the now-irrelevant stimulus, or an active inhibition mechanism mediating distractor suppression. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Retinotopic Maps, Spatial Tuning, and Locations of Human Visual Areas in Surface Coordinates Characterized with Multifocal and Blocked fMRI Designs

    PubMed Central

    Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo

    2012-01-01

    The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
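
    The phrase "weighted averaging of the GLM parameter estimates for the stimulus regions" can be given a small illustration. In the sketch below, the 24 regions are simplified to wedge sectors at different polar angles and a voxel's preferred angle is taken as the beta-weighted circular mean; this layout and the vector-averaging rule are assumptions for illustration, not the published procedure.

```python
# Hedged sketch of "weighted averaging of GLM parameter estimates": each
# stimulus region contributes one beta per voxel, and the voxel's preferred
# polar angle is the beta-weighted circular mean of the region angles.
import numpy as np

region_angles = np.deg2rad(np.arange(0, 360, 15))       # 24 sectors, one angle each (assumed)
rng = np.random.default_rng(1)
# Toy voxel: betas peak for regions near 45 degrees, plus noise, clipped at zero
betas = np.maximum(2.0 * np.cos(region_angles - np.deg2rad(45))
                   + rng.standard_normal(24), 0.0)

# Beta-weighted circular mean gives the preferred polar angle
x = np.sum(betas * np.cos(region_angles))
y = np.sum(betas * np.sin(region_angles))
preferred_deg = np.rad2deg(np.arctan2(y, x)) % 360
print(f"estimated preferred polar angle: {preferred_deg:.1f} deg")
```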

  7. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  8. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  9. The effect of visual salience on memory-based choices.

    PubMed

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulation of bottom-up attention by changing stimulus visual salience impacts on later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had either the same or a different feature to that of a memorized sample. We manipulated visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.

  10. Feasibility and performance evaluation of generating and recording visual evoked potentials using ambulatory Bluetooth based system.

    PubMed

    Ellingson, Roger M; Oken, Barry

    2010-01-01

    This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device to convert the display to an electrical waveform which is recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit synchronized to the visual stimulus, which is the critical averaging reference component to obtain VEP results. Results show that the variance in the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate suitability of the Bluetooth interface for portable ambulatory visual evoked potential implementation on our CAMAS platform.
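
    The marker messages are described as the critical averaging reference. The sketch below shows the generic averaging step they enable: cutting epochs of the recorded signal around each marker and averaging them to recover the evoked response. Sampling rate, epoch window, and toy data are illustrative assumptions, not details of the CAMAS system.

```python
# Generic stimulus-locked epoch averaging (assumed parameters, not CAMAS details).
import numpy as np

def average_evoked_response(signal, marker_samples, fs_hz, pre_s=0.1, post_s=0.4):
    pre, post = int(pre_s * fs_hz), int(post_s * fs_hz)
    epochs = [signal[m - pre:m + post] for m in marker_samples
              if m - pre >= 0 and m + post <= signal.size]
    return np.mean(epochs, axis=0)            # time-locked average across stimuli

fs = 250                                       # Hz (assumed)
rng = np.random.default_rng(6)
signal = rng.standard_normal(fs * 120)         # two minutes of noisy "EEG"
markers = np.arange(fs, signal.size - fs, fs)  # one stimulus marker per second
for m in markers:                              # small deflection 100 ms after each marker
    signal[m + int(0.10 * fs):m + int(0.15 * fs)] += 2.0

vep = average_evoked_response(signal, markers, fs)
peak_ms = (np.argmax(vep) - int(0.1 * fs)) / fs * 1000
print(f"peak of averaged response at {peak_ms:.0f} ms post-stimulus")
```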

  11. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  12. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Visual short-term memory load suppresses temporo-parietal junction activity and induces inattentional blindness.

    PubMed

    Todd, J Jay; Fougnie, Daryl; Marois, René

    2005-12-01

    The right temporo-parietal junction (TPJ) is critical for stimulus-driven attention and visual awareness. Here we show that as the visual short-term memory (VSTM) load of a task increases, activity in this region is increasingly suppressed. Correspondingly, increasing VSTM load impairs the ability of subjects to consciously detect the presence of a novel, unexpected object in the visual field. These results not only demonstrate that VSTM load suppresses TPJ activity and induces inattentional blindness, but also offer a plausible neural mechanism for this perceptual deficit: suppression of the stimulus-driven attentional network.

  14. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    PubMed

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  15. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
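
    The summation account in this record can be restated as a toy calculation. In the sketch below (assumed equal weights, not the authors' fitted model), the population response is expectation-related activity plus face-surprise activity; the face-house difference then shrinks as face expectation rises, in line with the reported interaction.

```python
# Toy re-expression of the verbal predictive-coding account above; weights and
# the surprise rule are illustrative assumptions, not the authors' model.
def ffa_response(stimulus_is_face, face_expectation, w_expect=1.0, w_surprise=1.0):
    """Population response = expectation-related activity + face-surprise activity."""
    face_surprise = (1.0 - face_expectation) if stimulus_is_face else 0.0
    return w_expect * face_expectation + w_surprise * face_surprise

for expectation in (0.25, 0.50, 0.75):
    face = ffa_response(True, expectation)
    house = ffa_response(False, expectation)
    print(f"face expectation {expectation:.2f}: face {face:.2f}, "
          f"house {house:.2f}, difference {face - house:.2f}")
```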

  16. Examining the Reinforcement-Enhancement Effects of Phencyclidine and Its Interactions with Nicotine on Lever-Pressing for a Visual Stimulus

    PubMed Central

    Swalve, Natashia; Barrett, Scott T.; Bevins, Rick A.; Li, Ming

    2015-01-01

    Nicotine is a widely-abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the contributing factors toward chronic use of nicotine-containing products has implicated the role of reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: 1) it produces discrepant results on overall reward, similar to those seen with nicotine and 2) it may elucidate how other compounds may interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. PMID:26026783

  17. 10-Month-Olds Visually Anticipate an Outcome Contingent on Their Own Action

    ERIC Educational Resources Information Center

    Kenward, Ben

    2010-01-01

    It is known that young infants can learn to perform an action that elicits a reinforcer, and that they can visually anticipate a predictable stimulus by looking at its location before it begins. Here, in an investigation of the display of these abilities in tandem, I report that 10-month-olds anticipate a reward stimulus that they generate through…

  18. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    PubMed Central

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  19. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm(3) brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168
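
    The closed-loop character of the protocol described in these two records, namely select a stimulus, measure the response, update, and select again, can be sketched in a few lines. The toy loop below searches a one-dimensional feature space with a greedy rule; the real protocol explored complex object feature spaces with more elaborate search machinery, so everything here is an illustrative assumption.

```python
# Minimal sketch (assumptions throughout) of a closed-loop stimulus search:
# show a stimulus, record a noisy "BOLD" response, update a running estimate,
# and pick the next stimulus near the best estimate so far.
import numpy as np

rng = np.random.default_rng(2)
feature_grid = np.linspace(0.0, 1.0, 50)                     # candidate stimuli
true_tuning = np.exp(-((feature_grid - 0.7) ** 2) / 0.05)    # hidden neural preference

estimates = np.zeros_like(feature_grid)
counts = np.zeros_like(feature_grid)
current = 25                                                  # start mid-space

for trial in range(60):
    response = true_tuning[current] + 0.3 * rng.standard_normal()
    counts[current] += 1
    estimates[current] += (response - estimates[current]) / counts[current]
    best = int(np.argmax(np.where(counts > 0, estimates, -np.inf)))
    # next stimulus: a small random step around the current best estimate
    current = int(np.clip(best + rng.integers(-3, 4), 0, feature_grid.size - 1))

print("best-responding feature value so far:", round(float(feature_grid[best]), 2))
```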

  1. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  2. Aural, visual, and pictorial stimulus formats in false recall.

    PubMed

    Beauchamp, Heather M

    2002-12-01

    The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.

  3. Visual and proprioceptive interaction in patients with bilateral vestibular loss☆

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  4. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL) patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  5. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e., neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm in which rare trials included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
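
    The study fit a Bayesian adaptation model; the toy rule below is a much simpler stand-in (a response gain that is depressed by each stimulus and slowly recovers) used only to illustrate why an adaptation account predicts response recovery for a brighter oddball but not for a dimmer one. All constants and the gain-update rule are illustrative assumptions.

```python
# Simplified adaptation stand-in (not the Bayesian model used in the study):
# a gain is knocked down by each stimulus and drifts back toward 1 in between.
def adapted_responses(intensities, recovery=0.2, adapt_strength=0.8):
    gain, responses = 1.0, []
    for s in intensities:
        responses.append(round(gain * s, 2))   # response scales with current gain
        gain += recovery * (1.0 - gain)        # gain recovers toward 1 between stimuli
        gain *= (1.0 - adapt_strength * s)     # ...and is depressed by each stimulus
        gain = max(gain, 0.05)
    return responses

repeats = [0.5] * 8
print("repeats + brighter oddball:", adapted_responses(repeats + [0.9]))
print("repeats + dimmer oddball:  ", adapted_responses(repeats + [0.2]))
```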

  6. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  7. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate.

    PubMed

    Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.

  8. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate

    PubMed Central

    Rademaker, Rosanne L.; van de Ven, Vincent G.; Tong, Frank; Sack, Alexander T.

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise. PMID:28384347
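
    The mixture-model decomposition mentioned in these two records is commonly implemented, for continuous-report data of this kind, as a von Mises distribution centred on the target plus a uniform guessing component, fit by maximum likelihood. That specific formulation is an assumption here, not a detail taken from the paper; the sketch shows the generic fit.

```python
# Common precision/guess-rate mixture fit (formulation assumed, not taken from
# the paper): von Mises centred on the target plus uniform guessing.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0   # modified Bessel function for the von Mises normaliser

def neg_log_likelihood(params, errors):
    guess_rate, kappa = params
    von_mises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    likelihood = (1 - guess_rate) * von_mises + guess_rate / (2 * np.pi)
    return -np.sum(np.log(likelihood))

# Synthetic errors (radians): 75% precise responses, 25% random guesses
rng = np.random.default_rng(3)
errors = np.concatenate([rng.vonmises(0.0, 8.0, 300),
                         rng.uniform(-np.pi, np.pi, 100)])

fit = minimize(neg_log_likelihood, x0=[0.2, 5.0], args=(errors,),
               bounds=[(1e-3, 0.999), (0.1, 100.0)])
print("estimated guess rate:", round(fit.x[0], 2),
      "| concentration kappa:", round(fit.x[1], 1))
```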

  9. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  10. The Effect of Optokinetic Stimulation on Perceptual and Postural Symptoms in Visual Vestibular Mismatch Patients.

    PubMed

    Van Ombergen, Angelique; Lubeck, Astrid J; Van Rompaey, Vincent; Maes, Leen K; Stins, John F; Van de Heyning, Paul H; Wuyts, Floris L; Bos, Jelte E

    2016-01-01

    Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Firstly, we aimed to get a better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is a necessary trait to evoke these symptoms or whether a complex but stationary visual pattern equally provokes them. Nine VVM patients and a matched healthy control group were examined by exposing both groups to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. No significant differences between both groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion increased postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus compared to scores after exposure to a stationary stimulus. VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation purposes. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.

  11. Modulation of visual physiology by behavioral state in monkeys, mice, and flies.

    PubMed

    Maimon, Gaby

    2011-08-01

    When a monkey attends to a visual stimulus, neurons in visual cortex respond differently to that stimulus than when the monkey attends elsewhere. In the 25 years since the initial discovery, the study of attention in primates has been central to understanding flexible visual processing. Recent experiments demonstrate that visual neurons in mice and fruit flies are modulated by locomotor behaviors, like running and flying, in a manner that resembles attention-based modulations in primates. The similar findings across species argue for a more generalized view of state-dependent sensory processing and for a renewed dialogue among vertebrate and invertebrate research communities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Spatial attention improves reliability of fMRI retinotopic mapping signals in occipital and parietal cortex

    PubMed Central

    Bressler, David W.; Silver, Michael A.

    2010-01-01

    Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
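
    The coherence measure described in this record has a compact implementation: the Fourier amplitude of the voxel time series at the wedge-rotation frequency, normalised by the square root of the summed power across frequencies. The scan length and cycle count in the sketch below are toy numbers for illustration.

```python
# Sketch of the phase-encoded mapping coherence statistic described above;
# the 192-volume scan and 8 stimulus cycles are illustrative assumptions.
import numpy as np

def coherence(timeseries, stim_freq_cycles):
    spectrum = np.abs(np.fft.rfft(timeseries - timeseries.mean()))
    signal_amp = spectrum[stim_freq_cycles]          # amplitude at the stimulus frequency
    return signal_amp / np.sqrt(np.sum(spectrum[1:] ** 2))

rng = np.random.default_rng(4)
n_vols, cycles = 192, 8
t = np.arange(n_vols)
voxel = np.sin(2 * np.pi * cycles * t / n_vols) + rng.standard_normal(n_vols)
print("coherence at stimulus frequency:", round(coherence(voxel, cycles), 2))
```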

  13. GABA(A) receptors in visual and auditory cortex and neural activity changes during basic visual stimulation.

    PubMed

    Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg

    2012-01-01

    Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies have used the presentation of discrete visual stimuli; the change from closed eyes to open also represents a simple visual stimulus, however, and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes closed (EC) and eyes open (EO) state in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.

  14. Turning Configural Processing Upside Down: Part and Whole Body Postures

    ERIC Educational Resources Information Center

    Reed, Catherine L.; Stone, Valerie E.; Grubb, Jefferson D.; McGoldrick, John E.

    2006-01-01

    Like faces, body postures are susceptible to an inversion effect in untrained viewers. The inversion effect may be indicative of configural processing, but what kind of configural processing is used for the recognition of body postures must be specified. The information available in the body stimulus was manipulated. The presence and magnitude of…

  15. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
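
    The Poisson counter idea stated in this record can be simulated directly: during an exposure of duration t, categorizations of stimulus i as category j accrue as Poisson counts with rate v(i, j), and the response is the category with the most counts. The rate matrix and the max-count decision rule below are illustrative assumptions.

```python
# Toy simulation of a Poisson counter model (rates and decision rule assumed).
import numpy as np

rng = np.random.default_rng(5)
# rates[i, j]: Poisson rate of "stimulus i looks like category j" categorizations
rates = np.array([[8.0, 3.0, 1.0],
                  [3.0, 8.0, 1.0],
                  [1.0, 1.0, 8.0]])

def simulate_accuracy(stimulus, exposure_s, n_trials=10000):
    counts = rng.poisson(rates[stimulus] * exposure_s, size=(n_trials, rates.shape[1]))
    # break ties (e.g., all-zero counts at very short exposures) at random
    noisy = counts + rng.uniform(0.0, 1e-3, size=counts.shape)
    return float(np.mean(np.argmax(noisy, axis=1) == stimulus))

for exposure in (0.02, 0.05, 0.20):   # seconds
    print(f"exposure {exposure * 1000:.0f} ms -> accuracy {simulate_accuracy(0, exposure):.2f}")
```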

  16. Contralateral Cortical Organisation of Information in Visual Short-Term Memory: Evidence from Lateralized Brain Activity during Retrieval

    ERIC Educational Resources Information Center

    Fortier-Gauthier, Ulysse; Moffat, Nicolas; Dell'Acqua, Roberto; McDonald, John J.; Jolicoeur, Pierre

    2012-01-01

    We studied brain activity during retention and retrieval phases of two visual short-term memory (VSTM) experiments. Experiment 1 used a balanced memory array, with one color stimulus in each hemifield, followed by a retention interval and a central probe, at the fixation point that designated the target stimulus in memory about which to make a…

  17. Stimulus meanings alter illusory self-motion (vection)--experimental examination of the train illusion.

    PubMed

    Seno, Takeharu; Fukuda, Haruaki

    2012-01-01

    Over the last 100 years, numerous studies have examined the effective visual stimulus properties for inducing illusory self-motion (known as vection). This vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual computer graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top down effects). Importantly, we show that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.

  18. Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model

    PubMed Central

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
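
    The key analysis step in this record is the fit of an exponential decay to accuracy as a function of cue delay, with the fitted time constant used to demarcate sensory (iconic) memory from VSTM. A minimal curve-fitting sketch of that step (the delays, accuracies, and starting values below are illustrative, not the authors' data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative data: accuracy decays from an initial level toward an
    # asymptote (VSTM-supported performance) as the cue delay grows.
    delays = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])   # seconds
    accuracy = np.array([0.92, 0.86, 0.80, 0.71, 0.62, 0.57, 0.55])

    def exp_decay(t, asymptote, amplitude, tau):
        """Exponential decay toward an asymptote with time constant tau."""
        return asymptote + amplitude * np.exp(-t / tau)

    params, _ = curve_fit(exp_decay, delays, accuracy, p0=(0.5, 0.4, 0.3))
    asymptote, amplitude, tau = params
    # The time constant marks the transition from sensory memory to VSTM.
    print(f"time constant tau = {tau * 1000:.0f} ms")
    ```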

  19. Bottlenecks of motion processing during a visual glance: the leaky flask model.

    PubMed

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E; Tripathy, Srimant P

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.

  20. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  1. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  3. Effects of inverting contour and features on processing for static and dynamic face perception: an MEG study.

    PubMed

    Miki, Kensaku; Takeshima, Yasuyuki; Watanabe, Shoko; Honda, Yukiko; Kakigi, Ryusuke

    2011-04-06

    We investigated the effects of inverting facial contour (hair and chin) and features (eyes, nose and mouth) on processing for static and dynamic face perception using magnetoencephalography (MEG). We used apparent motion, in which the first stimulus (S1) was replaced by a second stimulus (S2) with no interstimulus interval and subjects perceived visual motion, and presented three conditions as follows: (1) U&U: Upright contour and Upright features, (2) U&I: Upright contour and Inverted features, and (3) I&I: Inverted contour and Inverted features. In static face perception (S1 onset), the peak latency of the fusiform area's activity, which was related to static face perception, was significantly longer for U&I and I&I than for U&U in the right hemisphere and for U&I than for U&U and I&I in the left. In dynamic face perception (S2 onset), the strength (moment) of the occipitotemporal area's activity, which was related to dynamic face perception, was significantly larger for I&I than for U&U and U&I in the right hemisphere, but not the left. These results can be summarized as follows: (1) in static face perception, the activity of the right fusiform area was more affected by the inversion of features while that of the left fusiform area was more affected by the disruption of the spatial relation between the contour and features, and (2) in dynamic face perception, the activity of the right occipitotemporal area was affected by the inversion of the facial contour. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Maximising information recovery from rank-order codes

    NASA Astrophysics Data System (ADS)

    Sen, B.; Furber, S.

    2007-04-01

    The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
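
    The pipeline described here filters the image with a bank of Difference-of-Gaussian (DoG) filters, keeps only the rank order of the resulting coefficients, and reconstructs either from a look-up table of expected magnitudes or, in the later step, from a pseudo-inverse of the filter bank. A much-simplified sketch of the look-up-table variant follows; the scales, the exponential rank-to-magnitude table, and the assumption that coefficient signs are available are simplifications, not the VanRullen-Thorpe or SpikeNET implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_bank(image, sigmas=(1.0, 2.0, 4.0)):
        """Difference-of-Gaussian responses of the image, one map per scale."""
        return np.stack([gaussian_filter(image, s) - gaussian_filter(image, 1.6 * s)
                         for s in sigmas])

    def rank_order_encode(coeffs):
        """Keep only the order of the coefficients, largest magnitude first."""
        return np.argsort(-np.abs(coeffs.ravel()))

    def rank_order_decode(order, shape, magnitude_table, signs):
        """Rebuild coefficients from their rank via a fixed look-up table."""
        flat = np.zeros(np.prod(shape))
        flat[order] = magnitude_table * signs[order]
        return flat.reshape(shape)

    rng = np.random.default_rng(1)
    image = rng.random((64, 64))

    coeffs = dog_bank(image)
    order = rank_order_encode(coeffs)

    # Assumed look-up table: expected |coefficient| falls off smoothly with rank.
    magnitude_table = np.abs(coeffs).max() * np.exp(-np.arange(order.size)
                                                    / (0.1 * order.size))
    signs = np.sign(coeffs.ravel())
    coeffs_hat = rank_order_decode(order, coeffs.shape, magnitude_table, signs)

    # Treating the roughly orthogonal DoG filters as their own inverses, the image
    # estimate would be the filter bank applied to the decoded coefficients; here
    # we simply check how well the decoded coefficients track the originals.
    print("coefficient correlation:",
          np.corrcoef(coeffs.ravel(), coeffs_hat.ravel())[0, 1])
    ```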

  5. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence was measured under a wide range of delay conditions between auditory and visual stimuli, in an environment with low auditory clarity produced by pink noise at levels of -10 dB and -15 dB. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms) or less; the image was not helpful for comprehension when the delay was 8 frames (264 ms) or more; and in some cases at the largest delay (32 frames), the video image interfered with comprehension.

  6. Place avoidance learning and memory in a jumping spider.

    PubMed

    Peckmezian, Tina; Taylor, Phillip W

    2017-03-01

    Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.

  7. Dissociation between Neural Signatures of Stimulus and Choice in Population Activity of Human V1 during Perceptual Decision-Making

    PubMed Central

    Choe, Kyoung Whan; Blake, Randolph

    2014-01-01

    Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561

  8. Reduced Perceptual Exclusivity during Object and Grating Rivalry in Autism

    PubMed Central

    Freyberg, J.; Robertson, C.E.; Baron-Cohen, S.

    2015-01-01

    Background The dynamics of binocular rivalry may be a behavioural footprint of excitatory and inhibitory neural transmission in visual cortex. Given the presence of atypical visual features in Autism Spectrum Conditions (ASC), and evidence in support of the idea of an imbalance in excitatory/inhibitory neural transmission in ASC, we hypothesized that binocular rivalry might prove a simple behavioural marker of such a transmission imbalance in the autistic brain. In support of this hypothesis, we previously reported a slower rate of rivalry in ASC, driven by reduced perceptual exclusivity. Methods We tested whether atypical dynamics of binocular rivalry in ASC are specific to certain stimulus features. 53 participants (26 with ASC, matched for age, sex and IQ) participated in binocular rivalry experiments in which the dynamics of rivalry were measured at two levels of stimulus complexity, low (grayscale gratings) and high (coloured objects). Results Individuals with ASC experienced a slower rate of rivalry, driven by longer transitional states between dominant percepts. These exaggerated transitional states were present at both low and high levels of stimulus complexity, suggesting that atypical rivalry dynamics in autism are robust with respect to stimulus choice. Interactions between stimulus properties and rivalry dynamics in autism indicate that achromatic grating stimuli produce stronger group differences. Conclusion These results confirm the finding of atypical dynamics of binocular rivalry in ASC. These dynamics were present for stimuli of both low and high levels of visual complexity, suggesting an imbalance in competitive interactions throughout the visual system of individuals with ASC. PMID:26382002

  9. Decoding and reconstructing color from responses in human visual cortex.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2009-11-04

    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.
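
    The forward-model analysis mentioned here treats each voxel's response as a weighted sum of a small set of idealized color channels; the weights are estimated from training data and then inverted to recover the channel profile, and hence the color, of a held-out stimulus. A minimal sketch of that logic with simulated voxel data (the channel count, tuning shape, and population-vector read-out are generic assumptions, not the authors' exact model):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_channels, n_voxels, n_train = 6, 100, 120
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)

    def channel_responses(hues):
        """Idealized half-wave-rectified, squared sinusoidal color tuning curves."""
        resp = np.cos(hues[:, None] - centers[None, :])
        return np.clip(resp, 0, None) ** 2              # trials x channels

    # Simulated training data: voxel responses = channel responses x weights + noise
    train_hues = rng.uniform(0, 2 * np.pi, n_train)
    C_train = channel_responses(train_hues)
    W_true = rng.normal(size=(n_channels, n_voxels))
    B_train = C_train @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))

    # 1) Estimate channel-to-voxel weights by least squares
    W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

    # 2) Invert the model for a held-out stimulus: recover its channel profile
    test_hue = np.array([1.3])
    B_test = channel_responses(test_hue) @ W_true
    C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)

    # 3) Read out the reconstructed hue from the channel profile (population vector)
    hue_hat = np.angle(np.sum(C_hat.ravel() * np.exp(1j * centers))) % (2 * np.pi)
    print(f"true hue {test_hue[0]:.2f} rad, reconstructed {hue_hat:.2f} rad")
    ```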

  10. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    PubMed

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  11. Simple and powerful visual stimulus generator.

    PubMed

    Kremlácek, J; Kuba, M; Kubová, Z; Vít, F

    1999-02-01

    We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware. The method, based on an animation technique, uses the Autodesk Animator FLI format. The animation is replayed by a special program (the 'player'), which provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.

  12. Examining the reinforcement-enhancement effects of phencyclidine and its interactions with nicotine on lever-pressing for a visual stimulus.

    PubMed

    Swalve, Natashia; Barrett, Scott T; Bevins, Rick A; Li, Ming

    2015-09-15

    Nicotine is a widely-abused drug, yet its primary reinforcing effect does not seem as potent as that of other stimulants such as cocaine. Recent research on the contributing factors toward chronic use of nicotine-containing products has implicated the role of reinforcement-enhancing effects of nicotine. The present study investigates whether phencyclidine (PCP) may also possess a reinforcement-enhancement effect and how this may interact with the reinforcement-enhancement effect of nicotine. PCP was tested for two reasons: (1) it produces discrepant results on overall reward, similar to those seen with nicotine, and (2) it may elucidate how other compounds may interact with the reinforcement-enhancement of nicotine. Adult male Sprague-Dawley rats were trained to lever press for brief visual stimulus presentations under fixed-ratio (FR) schedules of reinforcement and then were tested with nicotine (0.2 or 0.4 mg/kg) and/or PCP (2.0 mg/kg) over six increasing FR values. A selective increase in active lever-pressing for the visual stimulus with drug treatment was considered evidence of a reinforcement-enhancement effect. PCP and nicotine separately increased active lever pressing for a visual stimulus in a dose-dependent manner and across the different FR schedules. The addition of PCP to nicotine did not increase lever-pressing for the visual stimulus, possibly due to a ceiling effect. The effect of PCP may be driven largely by its locomotor stimulant effects, whereas the effect of nicotine was independent of locomotor stimulation. This dissociation emphasizes that distinct pharmacological properties contribute to the reinforcement-enhancement effects of substances. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. On the use of continuous flash suppression for the study of visual processing outside of awareness

    PubMed Central

    Yang, Eunice; Brascamp, Jan; Kang, Min-Suk; Blake, Randolph

    2014-01-01

    The interocular suppression technique termed continuous flash suppression (CFS) has become an immensely popular tool for investigating visual processing outside of awareness. The emerging picture from studies using CFS is that extensive processing of a visual stimulus, including its semantic and affective content, occurs despite suppression from awareness of that stimulus by CFS. However, the current implementation of CFS in many studies examining processing outside of awareness has several drawbacks that may be improved upon for future studies using CFS. In this paper, we address some of those shortcomings, particularly ones that affect the assessment of unawareness during CFS, and ones to do with the use of “visible” conditions that are often included as a comparison to a CFS condition. We also discuss potential biases in stimulus processing as a result of spatial attention and feature-selective suppression. We suggest practical guidelines that minimize the effects of those limitations in using CFS to study visual processing outside of awareness. PMID:25071685

  14. Effects of perceptual load and socially meaningful stimuli on crossmodal selective attention in Autism Spectrum Disorder and neurotypical samples.

    PubMed

    Tyndall, Ian; Ragless, Liam; O'Hora, Denis

    2018-04-01

    The present study examined whether increasing visual perceptual load differentially affected both Socially Meaningful and Non-socially Meaningful auditory stimulus awareness in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, during high visual perceptual load NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability to engage selective attention to ignore non-salient irrelevant distractor stimuli in ASD. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. A Correlational Analysis of the Effects of Learner and Linear Programming Characteristics on Learning Programmed Instruction. Final Report.

    ERIC Educational Resources Information Center

    Seibert, Warren F.; Reid, Christopher J.

    Learning and retention may be influenced by subtle instructional stimulus characteristics and certain visual memory aptitudes. Ten stimulus characteristics were chosen for study; 50 sequences of programed instructional material were specially written to conform to sampled values of each stimulus characteristic. Seventy-three freshman subjects…

  16. Alerting Attention and Time Perception in Children.

    ERIC Educational Resources Information Center

    Droit-Volet, Sylvie

    2003-01-01

    Examined effects of a click signaling arrival of a visual stimulus to be timed on temporal discrimination in 3-, 5-, and 8-year-olds. Found that in all groups, the proportion of long responses increased with the stimulus duration, although the steepness of functions increased with age. Stimulus duration was judged longer with than without the…

  17. Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks

    ERIC Educational Resources Information Center

    Petursdottir, Anna Ingeborg; Aguilar, Gabriella

    2016-01-01

    Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…

  18. Stimulus Intensity and the Perception of Duration

    ERIC Educational Resources Information Center

    Matthews, William J.; Stewart, Neil; Wearden, John H.

    2011-01-01

    This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…

  19. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine-wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Merging Psychophysical and Psychometric Theory to Estimate Global Visual State Measures from Forced-Choices

    NASA Astrophysics Data System (ADS)

    Massof, Robert W.; Schmidt, Karen M.; Laby, Daniel M.; Kirschen, David; Meadows, David

    2013-09-01

    Visual acuity, a forced-choice psychophysical measure of visual spatial resolution, is the sine qua non of clinical visual impairment testing in ophthalmology and optometry patients with visual system disorders ranging from refractive error to retinal, optic nerve, or central visual system pathology. Visual acuity measures are standardized against a norm, but it is well known that visual acuity depends on a variety of stimulus parameters, including contrast and exposure duration. This paper asks if it is possible to estimate a single global visual state measure from visual acuity measures as a function of stimulus parameters that can represent the patient's overall visual health state with a single variable. Psychophysical theory (at the sensory level) and psychometric theory (at the decision level) are merged to identify the conditions that must be satisfied to derive a global visual state measure from parameterised visual acuity measures. A global visual state measurement model is developed and tested with forced-choice visual acuity measures from 116 subjects with no visual impairments and 560 subjects with uncorrected refractive error. The results are in agreement with the expectations of the model.

  1. Effect of stimulus size and luminance on the rod-, cone-, and melanopsin-mediated pupillary light reflex

    PubMed Central

    Park, Jason C.; McAnany, J. Jason

    2015-01-01

    This study determined if the pupillary light reflex (PLR) driven by brief stimulus presentations can be accounted for by the product of stimulus luminance and area (i.e., corneal flux density, CFD) under conditions biased toward the rod, cone, and melanopsin pathways. Five visually normal subjects participated in the study. Stimuli consisted of 1-s short- and long-wavelength flashes that spanned a large range of luminance and angular subtense. The stimuli were presented in the central visual field in the dark (rod and melanopsin conditions) and against a rod-suppressing short-wavelength background (cone condition). Rod- and cone-mediated PLRs were measured at the maximum constriction after stimulus onset whereas the melanopsin-mediated PLR was measured 5–7 s after stimulus offset. The rod- and melanopsin-mediated PLRs were well accounted for by CFD, such that doubling the stimulus luminance had the same effect on the PLR as doubling the stimulus area. Melanopsin-mediated PLRs were elicited only by short-wavelength, large (>16°) stimuli with luminance greater than 10 cd/m2, but when present, the melanopsin-mediated PLR was well accounted for by CFD. In contrast, CFD could not account for the cone-mediated PLR because the PLR was approximately independent of stimulus size but strongly dependent on stimulus luminance. These findings highlight important differences in how stimulus luminance and size combine to govern the PLR elicited by brief flashes under rod-, cone-, and melanopsin-mediated conditions. PMID:25788707
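
    Corneal flux density as used here is simply the product of stimulus luminance and stimulus area, so doubling the luminance should have the same effect on the rod- and melanopsin-mediated PLR as doubling the area. A small worked example of that equivalence (the luminance and size values are arbitrary):

    ```python
    import math

    def corneal_flux_density(luminance_cd_m2, diameter_deg):
        """Corneal flux density = luminance x stimulus area (deg^2 of visual angle)."""
        area_deg2 = math.pi * (diameter_deg / 2) ** 2
        return luminance_cd_m2 * area_deg2

    # Doubling the luminance at a fixed stimulus size...
    a = corneal_flux_density(luminance_cd_m2=20, diameter_deg=16)
    b = corneal_flux_density(luminance_cd_m2=40, diameter_deg=16)
    # ...matches doubling the area at a fixed luminance (diameter scaled by sqrt(2))
    c = corneal_flux_density(luminance_cd_m2=20, diameter_deg=16 * math.sqrt(2))

    # Both ratios equal 2: the two manipulations predict the same rod- and
    # melanopsin-mediated pupillary response under the CFD account.
    print(b / a, c / a)
    ```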

  2. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  3. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  4. Effects of nonspatial selective and divided visual attention on fMRI BOLD responses.

    PubMed

    Weerda, Riklef; Vallines, Ignacio; Thomas, James P; Rutschmann, Roland M; Greenlee, Mark W

    2006-09-01

    Using an uncertainty paradigm and functional magnetic resonance imaging (fMRI) we studied the effect of nonspatial selective and divided visual attention on the activity of specific areas of human extrastriate visual cortex. The stimuli were single ovals that differed from an implicit standard oval in either colour or width. The subjects' task was to classify the current stimulus as one of two possible alternatives per stimulus dimension. Three different experimental conditions were conducted: "colour-certainty", "shape-certainty" and "uncertainty". In all experimental conditions, the stimulus differed in only one stimulus dimension per trial. In the two certainty conditions, the subjects knew in advance which dimension this would be. During the uncertainty condition they had no such previous knowledge and had to monitor both dimensions simultaneously. Statistical analysis of the fMRI data (with SPM2) revealed a modest effect of the attended stimulus dimension on the neural activity in colour sensitive area V4 (more activity during attention to colour) and in shape sensitive area LOC (more activity during attention to shape). Furthermore, cortical areas known to be related to attention and working memory processes (e.g., lateral prefrontal and posterior parietal cortex) exhibit higher activity during the condition of divided attention ("uncertainty") than during that of selective attention ("certainty").

  5. Cholinergic Modulation of Visual Attention and Working Memory: Dissociable Effects of Basal Forebrain 192-IgG-Saporin Lesions and Intraprefrontal Infusions of Scopolamine

    ERIC Educational Resources Information Center

    Chudasama, Yogita; Dalley, Jeffrey W.; Nathwani, Falgyni; Bouger, Pascale; Robbins, Trevor W.

    2004-01-01

    Two experiments examined the effects of reductions in cortical cholinergic function on performance of a novel task that allowed for the simultaneous assessment of attention to a visual stimulus and memory for that stimulus over a variable delay within the same test session. In the first experiment, infusions of the muscarinic receptor antagonist…

  6. Functional significance of the emotion-related late positive potential

    PubMed Central

    Brown, Stephen B. R. E.; van Steenbergen, Henk; Band, Guido P. H.; de Rover, Mischa; Nieuwenhuis, Sander

    2012-01-01

    The late positive potential (LPP) is an event-related potential (ERP) component over visual cortical areas that is modulated by the emotional intensity of a stimulus. However, the functional significance of this neural modulation remains elusive. We conducted two experiments in which we studied the relation between LPP amplitude, subsequent perceptual sensitivity to a non-emotional stimulus (Experiment 1) and visual cortical excitability, as reflected by P1/N1 components evoked by this stimulus (Experiment 2). During the LPP modulation elicited by unpleasant stimuli, perceptual sensitivity was not affected. In contrast, we found some evidence for a decreased N1 amplitude during the LPP modulation, a decreased P1 amplitude on trials with a relatively large LPP, and consistent negative (but non-significant) across-subject correlations between the magnitudes of the LPP modulation and corresponding changes in d-prime or P1/N1 amplitude. The results provide preliminary evidence that the LPP reflects a global inhibition of activity in visual cortex, resulting in the selective survival of activity associated with the processing of the emotional stimulus. PMID:22375117

  7. Dynamic Network Communication in the Human Functional Connectome Predicts Perceptual Variability in Visual Illusion.

    PubMed

    Wang, Zhiwei; Zeljic, Kristina; Jiang, Qinying; Gu, Yong; Wang, Wei; Wang, Zheng

    2018-01-01

    Ubiquitous variability between individuals in visual perception is difficult to standardize and has thus essentially been ignored. Here we construct a quantitative psychophysical measure of illusory rotary motion based on the Pinna-Brelstaff figure (PBF) in 73 healthy volunteers and investigate the neural circuit mechanisms underlying perceptual variation using functional magnetic resonance imaging (fMRI). We acquired fMRI data from a subset of 42 subjects during spontaneous and 3 stimulus conditions: expanding PBF, expanding modified-PBF (illusion-free) and expanding modified-PBF with physical rotation. Brain-wide graph analysis of stimulus-evoked functional connectivity patterns yielded a functionally segregated architecture containing 3 discrete hierarchical networks, commonly shared between rest and stimulation conditions. Strikingly, communication efficiency and strength between 2 networks predominantly located in visual areas robustly predicted individual perceptual differences solely in the illusory stimulus condition. These unprecedented findings demonstrate that stimulus-dependent, not spontaneous, dynamic functional integration between distributed brain networks contributes to perceptual variability in humans. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Revealing hidden states in visual working memory using electroencephalography

    PubMed Central

    Wolff, Michael J.; Ding, Jacqueline; Myers, Nicholas E.; Stokes, Mark G.

    2015-01-01

    It is often assumed that information in visual working memory (vWM) is maintained via persistent activity. However, recent evidence indicates that information in vWM could be maintained in an effectively “activity-silent” neural state. Silent vWM is consistent with recent cognitive and neural models, but poses an important experimental problem: how can we study these silent states using conventional measures of brain activity? We propose a novel approach that is analogous to echolocation: using a high-contrast visual stimulus, it may be possible to drive brain activity during vWM maintenance and measure the vWM-dependent impulse response. We recorded electroencephalography (EEG) while participants performed a vWM task in which a randomly oriented grating was remembered. Crucially, a high-contrast, task-irrelevant stimulus was shown in the maintenance period in half of the trials. The electrophysiological response from posterior channels was used to decode the orientations of the gratings. While orientations could be decoded during and shortly after stimulus presentation, decoding accuracy dropped back close to baseline in the delay. However, the visual evoked response from the task-irrelevant stimulus resulted in a clear re-emergence in decodability. This result provides important proof-of-concept for a promising and relatively simple approach to decode “activity-silent” vWM content using non-invasive EEG. PMID:26388748
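
    The decoding step referred to here, predicting the remembered grating orientation from the pattern across posterior EEG channels during the stimulus, the delay, and after the task-irrelevant impulse, is a standard cross-validated multivariate analysis. A minimal sketch with simulated data; the channel count, the use of binary orientation labels, and scikit-learn's LDA are simplifying assumptions, not the authors' exact pipeline.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_channels = 200, 17            # e.g., posterior EEG channels

    # Simulated data: orientation labels and channel patterns in which the
    # orientation signal is weakly embedded (as after the impulse stimulus).
    labels = rng.integers(0, 2, n_trials)
    signal = np.outer(labels - 0.5, rng.normal(size=n_channels))
    eeg_patterns = signal + rng.normal(scale=1.0, size=(n_trials, n_channels))

    # Cross-validated decoding accuracy: at a delay-period time point without an
    # impulse this would be expected to sit near chance (0.5), and to rise again
    # after the task-irrelevant "ping" reveals the hidden memory content.
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, eeg_patterns, labels, cv=5)
    print("decoding accuracy:", scores.mean())
    ```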

  9. Gait bradykinesia in Parkinson's disease: a change in the motor program which controls the synergy of gait.

    PubMed

    Warabi, Tateo; Furuyama, Hiroyasu; Sugai, Eri; Kato, Masamichi; Yanagisawa, Nobuo

    2018-01-01

    This study examined how gait bradykinesia is changed by motor programming in Parkinson's disease. Thirty-five idiopathic Parkinson's disease patients and nine age-matched healthy subjects participated. After the patients fixated on a visual fixation target (the conditioning stimulus), voluntary gait was triggered by a visual on-stimulus. While the subject walked on a level floor, soleus and tibialis anterior EMG latencies and the y-axis vector of the sole-floor reaction force were examined. Three paradigms were used to distinguish the off- and on-latencies. In the gap task, the visual fixation target was turned off 200 ms before the on-stimulus appeared (a 200-ms gap), so EMG latency was not influenced by the fixation target. In the overlap task, the on-stimulus was turned on while the fixation target was still present (a 200-ms overlap). In the no-gap task, the fixation target was turned off and the on-stimulus was turned on simultaneously. The onset of the EMG pause following the tonic soleus EMG was defined as the off-latency of posture (termination), and the onset of the tibialis anterior EMG burst was defined as the on-latency of gait (initiation). In the gap task, the on-latency was unchanged in all subjects. In Parkinson's disease, the fixation target prolonged both the off- and on-latencies in the overlap task. In all tasks, the off-latency was prolonged and the off- and on-latencies were unsynchronized, which changed the synergic movement into a slow, short-step gait. The synergy of gait was regulated by two independent sensory-motor programs operating at the off- and on-latency levels. In Parkinson's disease, delayed gait initiation was due to difficulty in terminating the sensory-motor program that controls the subject's fixation, and dynamic gait bradykinesia reflected the difficulty (long off-latency) in terminating the motor program of the prior posture or movement.

  10. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. The Effect of Optokinetic Stimulation on Perceptual and Postural Symptoms in Visual Vestibular Mismatch Patients

    PubMed Central

    Van Rompaey, Vincent; Maes, Leen K.; Stins, John F.; Van de Heyning, Paul H.

    2016-01-01

    Background Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Objective Firstly, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is necessary to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Methods Nine VVM patients and a matched group of healthy controls were examined by exposing both groups to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. Results No significant differences between the two groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion caused an increase in postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus compared to scores after exposure to a stationary stimulus. Conclusions VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation purposes. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it. PMID:27128970

  12. A Novel Interhemispheric Interaction: Modulation of Neuronal Cooperativity in the Visual Areas

    PubMed Central

    Carmeli, Cristian; Lopez-Aguado, Laura; Schmidt, Kerstin E.; De Feo, Oscar; Innocenti, Giorgio M.

    2007-01-01

    Background The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres. Methods/Principal Findings To understand if and how the interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas, and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical systems theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate. Conclusions/Significance These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization. PMID:18074012

  13. BOLDSync: a MATLAB-based toolbox for synchronized stimulus presentation in functional MRI.

    PubMed

    Joshi, Jitesh; Saharan, Sumiti; Mandal, Pravat K

    2014-02-15

    Precise and synchronized presentation of paradigm stimuli in functional magnetic resonance imaging (fMRI) is central to obtaining accurate information about brain regions involved in a specific task. In this manuscript, we present a new MATLAB-based toolbox, BOLDSync, for synchronized stimulus presentation in fMRI. BOLDSync provides a user-friendly platform for design and presentation of visual, audio, as well as multimodal audio-visual (AV) stimuli in functional imaging experiments. We present simulation experiments that demonstrate the millisecond synchronization accuracy of BOLDSync, and also illustrate the functionalities of BOLDSync through application to an AV fMRI study. BOLDSync gains an advantage over other available proprietary and open-source toolboxes by offering a user-friendly and accessible interface that affords both precision in stimulus presentation and versatility across various types of stimulus designs and system setups. BOLDSync is a reliable, efficient, and versatile solution for synchronized stimulus presentation in fMRI studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Left neglect dyslexia and the effect of stimulus duration.

    PubMed

    Arduino, Lisa S; Vallar, Giuseppe; Burani, Cristina

    2006-01-01

    The present study investigated the effects of the duration of the stimulus on the reading performance of right-brain-damaged patients with left neglect dyslexia. Three Italian patients read aloud words and nonwords, under conditions of unlimited time of stimulus exposure and of timed presentation. In the untimed condition, the majority of the patients' errors involved the left side of the letter string (i.e., neglect dyslexia errors). Conversely, in the timed condition, although the overall level of performance decreased, errors were more evenly distributed across the whole letter string (i.e., visual - nonlateralized - errors). This reduction of neglect errors with a reduced time of presentation of the stimulus may reflect the read out of elements of the letter string from a preserved visual storage component, such as iconic memory. Conversely, a time-unlimited presentation of the stimulus may bring about the rightward bias that characterizes the performance of neglect patients, possibly by a capture of the patients' attention by the final (rightward) letters of the string.

  15. Standard deviation of luminance distribution affects lightness and pupillary response.

    PubMed

    Kanari, Kei; Kaneko, Hirohiko

    2014-12-01

    We examined whether the standard deviation (SD) of the luminance distribution serves as information about illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of the luminance distribution of the stimuli increased. We confirmed that these results were not simply due to an increase in the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of the luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination.
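
    The following Python sketch (not the authors' analysis code; the luminance values and the normalization rule are illustrative assumptions) shows the kind of computation involved: the SD of a surround luminance distribution is measured and used to scale a lightness estimate for a fixed-luminance patch.

      import numpy as np

      rng = np.random.default_rng(0)
      surround = rng.lognormal(mean=3.0, sigma=0.5, size=(256, 256))   # surround luminances (arbitrary units)
      patch_luminance = 20.0                                           # fixed test-patch luminance

      sd = surround.std()            # SD of the surround luminance distribution
      mean_lum = surround.mean()

      # Illustrative rule consistent with the finding: a larger surround SD implies a
      # higher inferred illumination, hence a lower lightness estimate for the same patch.
      lightness_estimate = patch_luminance / (mean_lum + sd)
      print(f"surround SD = {sd:.1f}, lightness estimate = {lightness_estimate:.3f}")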

  16. Attention to the Color of a Moving Stimulus Modulates Motion-Signal Processing in Macaque Area MT: Evidence for a Unified Attentional System.

    PubMed

    Katzner, Steffen; Busse, Laura; Treue, Stefan

    2009-01-01

    Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.

  17. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at both the behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
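
    A minimal Python sketch of the sensitivity index d' used above, computed from hit and false-alarm counts; the counts are made-up example data, and the log-linear correction is a common convention rather than the authors' stated procedure.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          # Log-linear correction keeps z-scores finite for perfect hit or false-alarm rates.
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      print(d_prime(hits=42, misses=8, false_alarms=5, correct_rejections=45))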

  18. Incongruent Abstract Stimulus-Response Bindings Result in Response Interference: fMRI and EEG Evidence from Visual Object Classification Priming

    ERIC Educational Resources Information Center

    Horner, Aidan J.; Henson, Richard N.

    2012-01-01

    Stimulus repetition often leads to facilitated processing, resulting in neural decreases (repetition suppression) and faster RTs (repetition priming). Such repetition-related effects have been attributed to the facilitation of repeated cognitive processes and/or the retrieval of previously encoded stimulus-response (S-R) bindings. Although…

  19. Stimulus and optode placement effects on functional near-infrared spectroscopy of visual cortex

    PubMed Central

    Kashou, Nasser H.; Giacherio, Brenna M.

    2016-01-01

    Abstract. Functional near-infrared spectroscopy has yet to be implemented as a stand-alone technique within an ophthalmology clinical setting, despite its promising advantages. The present study aims to further investigate the reliability of visual cortical signals. This was achieved by: (1) assessing the effects of optode placements using the 10–20 International System of Electrode Placement consisting of 28 channels, (2) determining effects of stimulus size on response, and (3) evaluating response variability as a result of cap placement across three sessions. Ten participants with mean age 23.8±4.8 years (five male) and varying types of hair color and thickness were recruited. Visual stimuli of black-and-white checkerboards, reversing at a frequency of 7.5 Hz, were presented. Visual angles of individual checker squares included 1 deg, 2 deg, 5 deg, 9 deg, and 18 deg. The number of channels that showed a response was analyzed for each participant, stimulus size, and session. The 1-deg stimulus showed the greatest activation. One of three data collection sessions for each participant gave different results (p<0.05). Hair color and thickness each had an effect on overall HbO (p<0.05), while only color had a significant effect for HbD (p<0.05). A reliable level of robustness and consistency is still required for clinical implementation and assessment of visual dysfunction. PMID:27335887

  20. Primary and multisensory cortical activity is correlated with audiovisual percepts.

    PubMed

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven

    2010-04-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.

  1. Primary and Multisensory Cortical Activity is Correlated with Audiovisual Percepts

    PubMed Central

    Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P.; Stufflebeam, Steven

    2012-01-01

    Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. PMID:19780040

  2. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
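
    A minimal Python sketch of the censoring step described above, assuming an illustrative array layout: threshold estimates below the 20 dB criterion are set to the criterion before fitting per-location progression slopes.

      import numpy as np

      # Thresholds (dB) at four example test locations across three visits.
      thresholds = np.array([[28., 25., 14., 6.],
                             [27., 23., 10., 2.],
                             [26., 22.,  5., 0.]])
      criterion_db = 20.0
      years = np.array([0.0, 0.5, 1.0])

      censored = np.maximum(thresholds, criterion_db)     # estimates < 20 dB set to 20 dB
      slopes = np.polyfit(years, censored, deg=1)[0]      # per-location progression slope (dB/year)
      print("per-location slopes after censoring:", slopes)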

  3. Anti-extinction in the tactile modality.

    PubMed

    White, Rebekah C; Aimola Davies, Anne M

    2013-01-01

    Patients with extinction fail to report a contralesional stimulus when it is presented at the same time as an ipsilesional stimulus, and patients with unilateral neglect fail to report a contralesional stimulus even when there is no competing ipsilesional stimulus. Whereas extinction and neglect are common following stroke, the related phenomenon of anti-extinction is rare--there are four cases of anti-extinction in the literature, and all four cases demonstrated anti-extinction in the visual modality. Patients with anti-extinction do report a contralesional stimulus when it is presented at the same time as an ipsilesional stimulus; but, like patients with neglect, they fail to report a contralesional stimulus when there is no competing ipsilesional stimulus. We present the first case of anti-extinction in the tactile modality.

  4. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  5. Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching

    PubMed Central

    Göschl, Florian; Engel, Andreas K.; Friese, Uwe

    2014-01-01

    Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102

  6. Figure-ground processing during fixational saccades in V1: indication for higher-order stability.

    PubMed

    Gilad, Ariel; Pesoa, Yair; Ayzenshtat, Inbal; Slovin, Hamutal

    2014-02-26

    In a typical visual scene we continuously perceive a "figure" that is segregated from the surrounding "background" despite ongoing microsaccades and small saccades that are performed when attempting fixation (fixational saccades [FSs]). Previously reported neuronal correlates of figure-ground (FG) segregation in the primary visual cortex (V1) showed enhanced activity in the "figure" along with suppressed activity in the noisy "background." However, it is unknown how this FG modulation in V1 is affected by FSs. To investigate this question, we trained two monkeys to detect a contour embedded in a noisy background while simultaneously imaging V1 using voltage-sensitive dyes. During stimulus presentation, the monkeys typically performed 1-3 FSs, which displaced the contour over the retina. Using eye position and a 2D analytical model to map the stimulus onto V1, we were able to compute FG modulation before and after each FS. On the spatial cortical scale, we found that, after each FS, FG modulation follows the stimulus retinal displacement and "hops" within the V1 retinotopic map, suggesting visual instability. On the temporal scale, FG modulation is initiated in the new retinotopic position before it disappears from the old retinotopic position. Moreover, the FG modulation developed faster after an FS than after stimulus onset, which may contribute to the visual stability of FG segregation along the timeline of stimulus presentation. Therefore, despite the spatial discontinuity of FG modulation in V1, the higher-order stability of FG modulation over time may enable our stable and continuous perception.
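
    The 2D analytical mapping of the stimulus onto V1 is not specified in the abstract; the Python sketch below instead uses a standard complex-log (monopole) approximation of V1 retinotopy, with illustrative parameters, to show how a retinal displacement of the contour translates into a cortical displacement.

      import numpy as np

      def retina_to_v1(ecc_deg, angle_deg, a=0.7, k=15.0):
          # Monopole complex-log model of V1 retinotopy: w = k * log(z + a), z in visual-field coordinates.
          z = ecc_deg * np.exp(1j * np.deg2rad(angle_deg))
          w = k * np.log(z + a)
          return w.real, w.imag          # approximate cortical position (mm)

      print(retina_to_v1(2.0, 30.0))     # contour position before a fixational saccade
      print(retina_to_v1(3.0, 20.0))     # position after the saccade displaces the contour on the retina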

  7. The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain.

    PubMed

    Coggan, David D; Baker, Daniel H; Andrews, Timothy J

    2016-01-01

    Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for the variance in the responses to intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.

  8. Task set induces dynamic reallocation of resources in visual short-term memory.

    PubMed

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines the asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  9. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of the classical theta (4-7 Hz), alpha (8-13 Hz) and beta (14-20 Hz) bands using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
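
    A minimal Python sketch of what a quasi-rhythmic stimulus signal might look like, assuming an illustrative frequency random walk confined to the theta band; this is not the authors' stimulus code.

      import numpy as np

      fs, dur = 500.0, 10.0                       # sample rate (Hz) and duration (s)
      t = np.arange(0, dur, 1.0 / fs)
      rng = np.random.default_rng(1)

      # Instantaneous frequency: a random walk clipped to the theta band (4-7 Hz).
      freq = 5.5 + np.cumsum(rng.normal(0, 0.05, t.size))
      freq = np.clip(freq, 4.0, 7.0)

      phase = 2 * np.pi * np.cumsum(freq) / fs    # integrate frequency to obtain phase
      luminance = 0.5 + 0.5 * np.sin(phase)       # quasi-rhythmic contrast/luminance trace in [0, 1]
      print(luminance[:5])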

  10. Design of novel non-contact multimedia controller for disability by using visual stimulus.

    PubMed

    Pan, Jeng-Shyang; Lo, Chi-Chun; Tsai, Shang-Ho; Lin, Bor-Shyh

    2015-12-01

    The design of a novel non-contact multimedia controller is proposed in this study. Nowadays, multimedia controllers are generally used by patients and nursing assistants in the hospital. Conventional multimedia controllers usually involve manual operation or other physical movements. However, it is difficult for disabled patients to operate a conventional multimedia controller by themselves; they may depend entirely on others. Different from other multimedia controllers, the proposed system provides a novel concept of controlling multimedia via visual stimuli, without manual operation. The disabled patients can easily operate the proposed multimedia system by focusing on the control icons of a visual stimulus device, where a commercial tablet is used as the visual stimulus device. Moreover, a wearable and wireless electroencephalogram (EEG) acquisition device is also designed and implemented to easily monitor the user's EEG signals in daily life. Finally, the proposed system has been validated. The experimental results show that the proposed system can effectively measure and extract the EEG features related to visual stimuli and achieves a good information transfer rate. Therefore, the proposed non-contact multimedia controller provides a good prototype for a novel multimedia control scheme. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Dissociating neural variability related to stimulus quality and response times in perceptual decision-making.

    PubMed

    Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten

    2018-03-01

    According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.
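
    A minimal Python sketch of the sequential-sampling idea referred to above: noisy evidence accumulates toward a threshold, so response time reflects both drift rate (stimulus quality) and within-trial noise. The parameter values are illustrative, not the fitted model.

      import numpy as np

      def diffusion_trial(drift, threshold=1.0, noise_sd=1.0, dt=0.001, rng=None):
          # One simulated trial: accumulate noisy evidence until |evidence| crosses the threshold.
          if rng is None:
              rng = np.random.default_rng()
          evidence, t = 0.0, 0.0
          while abs(evidence) < threshold:
              evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
              t += dt
          return t, evidence > 0   # decision time (s) and choice

      rng = np.random.default_rng(2)
      rts = [diffusion_trial(drift=0.8, rng=rng)[0] for _ in range(200)]  # higher drift ~ higher stimulus quality
      print("mean simulated RT:", np.mean(rts))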

  12. Critical role of foreground stimuli in perceiving visually induced self-motion (vection).

    PubMed

    Nakamura, S; Shimojo, S

    1999-01-01

    The effects of a foreground stimulus on vection (illusory perception of self-motion induced by a moving background stimulus) were examined in two experiments. The experiments reveal that the presentation of a foreground pattern with a moving background stimulus may affect vection. The foreground stimulus facilitated vection strength when it remained stationary or moved slowly in the opposite direction to that of the background stimulus. On the other hand, there was a strong inhibition of vection when the foreground stimulus moved slowly with, or quickly against, the background. These results suggest that foreground stimuli, as well as background stimuli, play an important role in perceiving self-motion.

  13. Stimulus Processing and Associative Learning in Wistar and WKHA Rats

    PubMed Central

    Chess, Amy C.; Keene, Christopher S.; Wyzik, Elizabeth C.; Bucci, David J.

    2007-01-01

    This study assessed basic learning and attention abilities in WKHA (Wistar-Kyoto Hyperactive) rats using appetitive conditioning preparations. Two measures of conditioned responding to a visual stimulus were recorded: orienting behavior (rearing on the hind legs) and food cup behavior (placing the head inside the recessed food cup). In Experiment 1, simple conditioning but not extinction was impaired in WKHA rats compared to Wistar rats. In Experiment 2, non-reinforced presentations of the visual cue preceded the conditioning sessions. WKHA rats displayed less orienting behavior than Wistar rats, but comparable levels of food cup behavior. These data suggest that WKHA rats exhibit specific abnormalities in attentional processing as well as in learning stimulus-reward relationships. PMID:15998198

  14. Iconic-memory processing of unfamiliar stimuli by retarded and nonretarded individuals.

    PubMed

    Hornstein, H A; Mosley, J L

    1979-07-01

    The iconic-memory processing of unfamiliar stimuli was examined using a visually cued partial-report procedure and a visual masking procedure. Subjects viewed stimulus arrays consisting of six Chinese characters arranged in a circular pattern for 100 msec. At variable stimulus-onset asynchronies, a teardrop indicator or an annulus was presented for 100 msec. Immediately upon cue offset, the subject was required to recognize the cued stimulus from a card containing a single character. Retarded subjects' performance was comparable to that of MA- and CA-matched subjects. We suggested that earlier reported iconic-memory differences between retarded and nonretarded individuals may be attributable to processes other than iconic memory.

  15. Visual perceptual learning by operant conditioning training follows rules of contingency.

    PubMed

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide conditioning, such as stimulus-reward contingency (e.g., that the stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of conditioning.

  16. Visual perceptual learning by operant conditioning training follows rules of contingency

    PubMed Central

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide conditioning, such as stimulus-reward contingency (e.g., that the stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of conditioning. PMID:26028984

  17. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
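
    A minimal Python sketch, on simulated data, of quantifying steady-state responses in the spectral domain: the amplitude at each stimulation rate (3.14 and 3.63 Hz) is read off the Fourier spectrum of the recorded signal. The signal itself is synthetic and the analysis choices are illustrative.

      import numpy as np

      fs, dur = 256.0, 20.0
      t = np.arange(0, dur, 1.0 / fs)
      rng = np.random.default_rng(3)

      # Synthetic signal: two tagged steady-state components plus broadband noise.
      eeg = (1.0 * np.sin(2 * np.pi * 3.14 * t)
             + 0.6 * np.sin(2 * np.pi * 3.63 * t)
             + rng.normal(0, 2.0, t.size))

      spectrum = np.abs(np.fft.rfft(eeg)) / t.size
      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      for f_tag in (3.14, 3.63):
          idx = np.argmin(np.abs(freqs - f_tag))
          print(f"amplitude near {f_tag} Hz: {spectrum[idx]:.3f}")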

  18. Masking disrupts reentrant processing in human visual cortex.

    PubMed

    Fahrenfort, J J; Scholte, H S; Lamme, V A F

    2007-09-01

    In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow "catches up" with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception--and most notably visual awareness itself--may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electroencephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.

  19. Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    PubMed Central

    Supèr, Hans; Romeo, August

    2012-01-01

    A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370

  20. More than the Verbal Stimulus Matters: Visual Attention in Language Assessment for People with Aphasia Using Multiple-Choice Image Displays

    ERIC Educational Resources Information Center

    Heuer, Sabine; Ivanova, Maria V.; Hallowell, Brooke

    2017-01-01

    Purpose: Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic…

  1. Internal model of gravity influences configural body processing.

    PubMed

    Barra, Julien; Senot, Patrice; Auclair, Laurent

    2017-01-01

    Human bodies are processed by a configural processing mechanism. Evidence supporting this claim is the body inversion effect, in which inversion impairs recognition of bodies more than other objects. Biomechanical configuration, as well as both visual and embodied expertise, has been demonstrated to play an important role in this effect. Nevertheless, an important factor in the body inversion effect may also be the orientation of gravity, since gravity is one of the most fundamental constraints on our biology, behavior, and perception on Earth. The visual presentation of an inverted body in a typical body inversion paradigm turns the observed body upside down but also inverts the implicit direction of visual gravity in the scene. The orientation of visual gravity is then in conflict with the direction of actual gravity and may influence configural processing. To test this hypothesis, we dissociated the orientations of the body and of visual gravity by manipulating body posture. In a pretest we showed that it was possible to turn an avatar upside down (inversion relative to retinal coordinates) without inverting the orientation of visual gravity when the avatar stands on his/her hands. We compared the inversion effect in typical conditions (with gravity conflict when the avatar is upside down) to the inversion effect in conditions with no conflict between visual and physical gravity. The results of our experiment revealed that the inversion effect, as measured by both error rate and reaction time, was strongly reduced when there was no gravity conflict. Our results suggest that when an observed body is upside down (inversion relative to participants' retinal coordinates) but the orientation of visual gravity is not, configural processing of bodies might still be possible. In this paper, we discuss the implications of an internal model of gravity in the configural processing of observed bodies. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  3. Steady-state pattern electroretinogram and short-duration transient visual evoked potentials in glaucomatous and healthy eyes.

    PubMed

    Amarasekera, Dilru C; Resende, Arthur F; Waisbourd, Michael; Puri, Sanjeev; Moster, Marlene R; Hark, Lisa A; Katz, L Jay; Fudemberg, Scott J; Mantravadi, Anand V

    2018-01-01

    This study evaluates two rapid electrophysiological glaucoma diagnostic tests that may add a functional perspective to glaucoma diagnosis. This study aimed to determine the ability of two office-based electrophysiological diagnostic tests, steady-state pattern electroretinogram and short-duration transient visual evoked potentials, to discern between glaucomatous and healthy eyes. This is a cross-sectional study in a hospital setting. Forty-one patients with glaucoma and 41 healthy volunteers participated in the study. Steady-state pattern electroretinogram and short-duration transient visual evoked potential testing was conducted in glaucomatous and healthy eyes. A 64-bar-size stimulus with both a low-contrast and high-contrast setting was used to compare steady-state pattern electroretinogram parameters in both groups. A low-contrast and high-contrast checkerboard stimulus was used to measure short-duration transient visual evoked potential parameters in both groups. Steady-state pattern electroretinogram parameters compared were MagnitudeD, MagnitudeD/Magnitude ratio, and the signal-to-noise ratio. Short-duration transient visual evoked potential parameters compared were amplitude and latency. MagnitudeD was significantly lower in glaucoma patients when using a low-contrast (P = 0.001) and high-contrast (P < 0.001) 64-bar-size steady-state pattern electroretinogram stimulus. MagnitudeD/Magnitude ratio and SNR were significantly lower in the glaucoma group when using a high-contrast 64-bar-size stimulus (P < 0.001 and P = 0.010, respectively). Short-duration transient visual evoked potential amplitude and latency were not significantly different between the two groups. Steady-state pattern electroretinogram was effectively able to discern between glaucomatous and healthy eyes. Steady-state pattern electroretinogram may thus have a role as a clinically useful electrophysiological diagnostic tool. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  4. Population Coding of Visual Space: Comparison of Spatial Representations in Dorsal and Ventral Pathways

    PubMed Central

    Sereno, Anne B.; Lehky, Sidney R.

    2011-01-01

    Although the representation of space is as fundamental to visual processing as the representation of shape, it has received relatively little attention from neurophysiological investigations. In this study we characterize representations of space within visual cortex, and examine how they differ in a first direct comparison between dorsal and ventral subdivisions of the visual pathways. Neural activities were recorded in anterior inferotemporal cortex (AIT) and lateral intraparietal cortex (LIP) of awake behaving monkeys, structures associated with the ventral and dorsal visual pathways respectively, as a stimulus was presented at different locations within the visual field. In spatially selective cells, we find greater modulation of cell responses in LIP with changes in stimulus position. Further, using a novel population-based statistical approach (namely, multidimensional scaling), we recover the spatial map implicit within activities of neural populations, allowing us to quantitatively compare the geometry of neural space with physical space. We show that a population of spatially selective LIP neurons, despite having large receptive fields, is able to almost perfectly reconstruct stimulus locations within a low-dimensional representation. In contrast, a population of AIT neurons, despite each cell being spatially selective, provide less accurate low-dimensional reconstructions of stimulus locations. They produce instead only a topologically (categorically) correct rendition of space, which nevertheless might be critical for object and scene recognition. Furthermore, we found that the spatial representation recovered from population activity shows greater translation invariance in LIP than in AIT. We suggest that LIP spatial representations may be dimensionally isomorphic with 3D physical space, while in AIT spatial representations may reflect a more categorical representation of space (e.g., “next to” or “above”). PMID:21344010
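
    A minimal Python sketch of the population-based approach named above: multidimensional scaling applied to simulated population response patterns recovers a low-dimensional configuration of stimulus locations. The tuning model and parameters are illustrative assumptions, not the recorded data.

      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(4)
      # 16 stimulus locations on a 4 x 4 grid of the visual field.
      locations = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)

      # Simulated population: 30 neurons with broad Gaussian spatial tuning plus noise.
      centers = rng.uniform(0, 3, size=(30, 2))
      dists = np.linalg.norm(locations[:, None, :] - centers[None, :, :], axis=2)
      responses = np.exp(-(dists ** 2) / 4.0) + rng.normal(0, 0.05, dists.shape)

      # MDS on the population response patterns yields a 2-D map of the 16 locations.
      embedding = MDS(n_components=2, random_state=0).fit_transform(responses)
      print(embedding.shape)   # (16, 2)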

  5. Effects of set-size and selective spatial attention on motion processing.

    PubMed

    Dobkins, K R; Bosworth, R G

    2001-05-01

    In order to investigate the effects of divided attention and selective spatial attention on motion processing, we obtained direction-of-motion thresholds using a stochastic motion display under various attentional manipulations and stimulus durations (100-600 ms). To investigate divided attention, we compared motion thresholds obtained when a single motion stimulus was presented in the visual field (set-size=1) to those obtained when the motion stimulus was presented amongst three confusable noise distractors (set-size=4). The magnitude of the observed detriment in performance with an increase in set-size from 1 to 4 could be accounted for by a simple decision model based on signal detection theory, which assumes that attentional resources are not limited in capacity. To investigate selective attention, we compared motion thresholds obtained when a valid pre-cue alerted the subject to the location of the to-be-presented motion stimulus to those obtained when no pre-cue was provided. As expected, the effect of pre-cueing was large when the visual field contained noise distractors, an effect we attribute to "noise reduction" (i.e. the pre-cue allows subjects to exclude irrelevant distractors that would otherwise impair performance). In the single motion stimulus display, we found a significant benefit of pre-cueing only at short durations (≤150 ms), a result that can potentially be explained by a "time-to-orient" hypothesis (i.e. the pre-cue improves performance by eliminating the time it takes to orient attention to a peripheral stimulus at its onset, thereby increasing the time spent processing the stimulus). Thus, our results suggest that the visual motion system can analyze several stimuli simultaneously without limitations on sensory processing per se, and that spatial pre-cueing serves to reduce the effects of distractors and perhaps increase the effective processing time of the stimulus.
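
    A minimal Python sketch of an unlimited-capacity signal-detection decision model of the kind mentioned above: the observer picks the location with the largest internal response, so accuracy drops from set-size 1 to set-size 4 without any loss of per-item sensory quality. The d' value and trial counts are illustrative.

      import numpy as np

      def percent_correct(d_prime, set_size, n_trials=100_000, rng=None):
          # Max-rule observer: the location with the largest internal response is chosen.
          if rng is None:
              rng = np.random.default_rng(5)
          target = rng.normal(d_prime, 1.0, n_trials)                      # signal location
          if set_size == 1:
              return 1.0                                                   # no confusable distractors to lose to
          distractors = rng.normal(0.0, 1.0, (n_trials, set_size - 1))     # noise locations
          return float(np.mean(target > distractors.max(axis=1)))

      for n in (1, 4):
          print(f"set size {n}: proportion correct = {percent_correct(d_prime=1.5, set_size=n):.3f}")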

  6. Stimulus selectivity and response latency in putative inhibitory and excitatory neurons of the primate inferior temporal cortex

    PubMed Central

    Mruczek, Ryan E. B.

    2012-01-01

    The cerebral cortex is composed of many distinct classes of neurons. Numerous studies have demonstrated corresponding differences in neuronal properties across cell types, but these comparisons have largely been limited to conditions outside of awake, behaving animals. Thus the functional role of the various cell types is not well understood. Here, we investigate differences in the functional properties of two widespread and broad classes of cells in inferior temporal cortex of macaque monkeys: inhibitory interneurons and excitatory projection cells. Cells were classified as putative inhibitory or putative excitatory neurons on the basis of their extracellular waveform characteristics (e.g., spike duration). Consistent with previous intracellular recordings in cortical slices, putative inhibitory neurons had higher spontaneous firing rates and higher stimulus-evoked firing rates than putative excitatory neurons. Additionally, putative excitatory neurons were more susceptible to spike waveform adaptation following very short interspike intervals. Finally, we compared two functional properties of each neuron's stimulus-evoked response: stimulus selectivity and response latency. First, putative excitatory neurons showed stronger stimulus selectivity compared with putative inhibitory neurons. Second, putative inhibitory neurons had shorter response latencies compared with putative excitatory neurons. Selectivity differences were maintained and latency differences were enhanced during a visual search task emulating more natural viewing conditions. Our results suggest that short-latency inhibitory responses are likely to sculpt visual processing in excitatory neurons, yielding a sparser visual representation. PMID:22933717
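
    A minimal Python sketch of waveform-based cell classification as described above: units with narrow trough-to-peak spike widths are labeled putative inhibitory interneurons. The example widths and the cutoff are illustrative assumptions.

      import numpy as np

      # Example trough-to-peak spike widths (ms) for six recorded units.
      trough_to_peak_ms = np.array([0.21, 0.64, 0.30, 0.55, 0.72, 0.18])
      narrow_spike_cutoff_ms = 0.35            # illustrative boundary between the two classes

      putative_inhibitory = trough_to_peak_ms < narrow_spike_cutoff_ms
      putative_excitatory = ~putative_inhibitory
      print("putative inhibitory units:", np.where(putative_inhibitory)[0])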

  7. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and rapid variation of perceived illumination would impair the reliability of visual information in determining self-motion.

  8. Scopolamine effects on visual discrimination: modifications related to stimulus control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, H.L.

    1975-01-01

    Stumptail monkeys (Macaca arctoides) performed a discrete trial, three-choice visual discrimination. The discrimination behavior was controlled by the shape of the visual stimuli. The strength of the stimuli in controlling behavior was systematically related to a physical property of the stimuli, luminance. Low luminance provided weak control, resulting in a low accuracy of discrimination, a low response probability, and maximal sensitivity to scopolamine (7.5-60 μg/kg). In contrast, high luminance provided strong control of behavior and attenuated the effects of scopolamine. Methylscopolamine had no effect in doses of 30 to 90 μg/kg. Scopolamine effects resembled the effects of reducing stimulus control in undrugged monkeys. Since behavior under weak control seems to be especially sensitive to drugs, manipulations of stimulus control may be particularly useful whenever determination of the minimally effective dose is important, as in behavioral toxicology. Present results are interpreted as specific visual effects of the drug, since nonsensory factors such as baseline response rate, reinforcement schedule, training history, motor performance, and motivation were controlled. Implications for state-dependent effects of drugs are discussed.

  9. Bingo! Externally-Supported Performance Intervention for Deficient Visual Search in Normal Aging, Parkinson’s Disease and Alzheimer’s Disease

    PubMed Central

    Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice

    2011-01-01

    External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally supported performance interventions of increased stimulus size and decreased complexity resulted in improvements in performance in all groups. The AD group also benefited from increased contrast, presumably by compensating for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941

  10. Moderation of Stimulus Material on the Prediction of IQ with Infants' Performance in the Visual Expectation Paradigm: Do Greebles Make the Task More Challenging?

    ERIC Educational Resources Information Center

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vöhringer, Isabel A.; Suhrke, Janina; Poloczek, Sonja; Freitag, Claudia; Lamm, Bettina; Teiser, Johanna; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2015-01-01

    The objective of this study was to examine the role of the stimulus material for the prediction of later IQ by early learning measures in the Visual Expectation Paradigm (VExP). The VExP was assessed at 9 months using two types of stimuli, Greebles and human faces. Greebles were assumed to be associated with a higher load on working memory in…

  11. Masking reduces orientation selectivity in rat visual cortex

    PubMed Central

    Alwis, Dasuni S.; Richards, Katrina L.

    2016-01-01

    In visual masking the perception of a target stimulus is impaired by a preceding (forward) or succeeding (backward) mask stimulus. The illusion is of interest because it allows uncoupling of the physical stimulus, its neuronal representation, and its perception. To understand the neuronal correlates of masking, we examined how masks affected the neuronal responses to oriented target stimuli in the primary visual cortex (V1) of anesthetized rats (n = 37). Target stimuli were circular gratings with 12 orientations; mask stimuli were plaids created as a binarized sum of all possible target orientations. Spatially, masks were presented either overlapping or surrounding the target. Temporally, targets and masks were presented for 33 ms, but the stimulus onset asynchrony (SOA) of their relative appearance was varied. For the first time, we examine how spatially overlapping and center-surround masking affect orientation discriminability (rather than visibility) in V1. Regardless of the spatial or temporal arrangement of stimuli, the greatest reductions in firing rate and orientation selectivity occurred for the shortest SOAs. Interestingly, analyses conducted separately for transient and sustained target response components showed that changes in orientation selectivity do not always coincide with changes in firing rate. Given the near-instantaneous reductions observed in orientation selectivity even when target and mask do not spatially overlap, we suggest that monotonic visual masking is explained by a combination of neural integration and lateral inhibition. PMID:27535373
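
    A minimal Python sketch of one common way to quantify orientation selectivity from responses to 12 orientations (a circular-variance-based index); the simulated tuning curves and the drop under masking are illustrative, not the recorded data.

      import numpy as np

      orientations_deg = np.arange(0, 180, 15)            # 12 target orientations
      thetas = np.deg2rad(orientations_deg)

      def selectivity_index(rates, thetas):
          # 1 - circular variance in orientation space (angles doubled); 0 = untuned, 1 = perfectly tuned.
          return np.abs(np.sum(rates * np.exp(2j * thetas))) / np.sum(rates)

      # Simulated tuning curves: a well-tuned response and a flattened, masked response.
      tuned = 5 + 20 * np.exp(-((orientations_deg - 90) ** 2) / (2 * 20 ** 2))
      masked = 5 + 4 * np.exp(-((orientations_deg - 90) ** 2) / (2 * 20 ** 2))
      print(selectivity_index(tuned, thetas), selectivity_index(masked, thetas))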

  12. Hummingbirds control hovering flight by stabilizing visual motion.

    PubMed

    Goller, Benjamin; Altshuler, Douglas L

    2014-12-23

    Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow--image movement across the retina--is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.

  13. Relativistic compression and expansion of experiential time in the left and right space.

    PubMed

    Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano

    2008-03-05

    Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects on time perception of spatial and magnitude factors remain poorly investigated. Here we wanted to investigate whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments visual stimuli were constituted by digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment visual stimuli were constituted by black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. On the other hand, in midline position, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation of stimuli of low magnitude and time overestimation of stimuli of high magnitude. These results argue for the presence of strict interactions between space, time and magnitude representation in the human brain.

  14. Characteristics of implicit chaining in cotton-top tamarins (Saguinus oedipus).

    PubMed

    Locurto, Charles; Gagne, Matthew; Nutile, Lauren

    2010-07-01

    In human cognition there has been considerable interest in observing the conditions under which subjects learn material without explicit instructions to learn. In the present experiments, we adapted this issue to nonhumans by asking what subjects learn in the absence of explicit reinforcement for correct responses. Two experiments examined the acquisition of sequence information by cotton-top tamarins (Saguinus oedipus) when such learning was not demanded by the experimental contingencies. An implicit chaining procedure was used in which visual stimuli were presented serially on a touchscreen. Subjects were required to touch one stimulus to advance to the next stimulus. Stimulus presentations followed a pattern, but learning the pattern was not necessary for reinforcement. In Experiment 1 the chain consisted of five different visual stimuli that were presented in the same order on each trial. Each stimulus could occur at any one of six touchscreen positions. In Experiment 2 the same visual element was presented serially in the same five locations on each trial, thereby allowing a behavioral pattern to be correlated with the visual pattern. In this experiment two new tests, a Wild-Card test and a Running-Start test, were used to assess what was learned in this procedure. Results from both experiments indicated that tamarins acquired more information from an implicit chain than was required by the contingencies of reinforcement. These results contribute to the developing literature on nonhuman analogs of implicit learning.

  15. TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues

    NASA Astrophysics Data System (ADS)

    Cohen, Ethan D.

    2007-06-01

    The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however development lags in the design and construction of effective machine brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.

  16. The time course of shape discrimination in the human brain.

    PubMed

    Ales, Justin M; Appelbaum, L Gregory; Cottereau, Benoit R; Norcia, Anthony M

    2013-02-15

    The lateral occipital cortex (LOC) activates selectively to images of intact objects versus scrambled controls, is selective for the figure-ground relationship of a scene, and exhibits at least some degree of invariance for size and position. Because of these attributes, it is considered to be a crucial part of the object recognition pathway. Here we show that human LOC is critically involved in perceptual decisions about object shape. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on texture-defined figures segmented by either phase or orientation cues. The appearance or disappearance of a figure region from a uniform background generated robust visual evoked potentials throughout retinotopic cortex as determined by inverse modeling of the scalp voltage distribution. Contrasting responses from trials containing shape changes that were correctly detected (hits) with trials in which no change occurred (correct rejects) revealed stimulus-locked, target-selective activity in the occipital visual areas LOC and V4 preceding the subject's response. Activity that was locked to the subjects' reaction time was present in the LOC. Response-locked activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape-selectivity was present across four different stimulus configurations used to define the figure; LOC responses correlated with participants' reaction times. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. The impact of sensorimotor experience on affective evaluation of dance

    PubMed Central

    Kirsch, Louise P.; Drommelschmidt, Kim A.; Cross, Emily S.

    2013-01-01

    Past research demonstrates that we are more likely to positively evaluate a stimulus if we have had previous experience with that stimulus. This has been shown for judgment of faces, architecture, artworks and body movements. In contrast, other evidence suggests that this relationship can also work in the inverse direction, at least in the domain of watching dance. Specifically, it has been shown that in certain contexts, people derive greater pleasure from watching unfamiliar movements they would not be able to physically reproduce compared to simpler, familiar actions they could physically reproduce. It remains unknown, however, how different kinds of experience with complex actions, such as dance, might change observers' affective judgments of these movements. Our aim was to clarify the relationship between experience and affective evaluation of whole body movements. In a between-subjects design, participants received either physical dance training with a video game system, visual and auditory experience or auditory experience only. Participants' aesthetic preferences for dance stimuli were measured before and after the training sessions. Results show that participants from the physical training group not only improved their physical performance of the dance sequences, but also reported higher enjoyment and interest in the stimuli after training. This suggests that physically learning particular movements leads to greater enjoyment while observing them. These effects are not simply due to increased familiarity with audio or visual elements of the stimuli, as the other two training groups showed no increase in aesthetic ratings post-training. We suggest these results support an embodied simulation account of aesthetics, and discuss how the present findings contribute to a better understanding of the shaping of preferences by sensorimotor experience. PMID:24027511

  18. Reconciling discrepant findings for P3 brain response in criminal psychopathy through reference to the concept of externalizing proneness.

    PubMed

    Venables, Noah C; Patrick, Christopher J

    2014-05-01

    We sought to address inconsistencies in the literature on amplitude of P3 brain potential response in offenders diagnosed with psychopathy. These inconsistencies contrast with the reliable finding of reduced P3 in relation to externalizing tendencies, which overlap with impulsive-antisocial features of psychopathy, as distinguished from the affective-interpersonal features. Employing a sample of incarcerated male offenders (N = 154) who completed the Psychopathy Checklist-Revised along with a three-stimulus visual oddball task, we tested the hypothesis that impulsive-antisocial features of psychopathy would selectively exhibit an inverse relationship with P3 amplitude. Clear support for this hypothesis was obtained. Our findings clarify the discrepant findings regarding psychopathy and P3, and establish P3 as a neurophysiological point of contact between psychopathy and externalizing proneness from the broader psychopathology literature. Copyright © 2014 Society for Psychophysiological Research.

  19. Reconciling discrepant findings for P3 brain response in criminal psychopathy through reference to the concept of externalizing proneness

    PubMed Central

    Venables, Noah C.; Patrick, Christopher J.

    2014-01-01

    We sought to address inconsistencies in the literature on amplitude of P3 brain potential response in offenders diagnosed with psychopathy. These inconsistencies contrast with the reliable finding of reduced P3 in relation to externalizing tendencies, which overlap with impulsive-antisocial features of psychopathy, as distinguished from the affective-interpersonal features. Employing a sample of incarcerated male offenders (N=154) who completed Hare’s (2003) Psychopathy Checklist-Revised along with a three-stimulus visual oddball task, we tested the hypothesis that impulsive-antisocial features of psychopathy would selectively exhibit an inverse relationship with P3 amplitude. Clear support for this hypothesis was obtained. Our findings clarify the discrepant findings regarding psychopathy and P3, and establish P3 as a neurophysiological point of contact between psychopathy and externalizing proneness from the broader psychopathology literature. PMID:24579849

  20. Impaired Visual Motor Coordination in Obese Adults.

    PubMed

    Gaul, David; Mat, Arimin; O'Shea, Donal; Issartel, Johann

    2016-01-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies for obese participants. The obese group has greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.

  1. Visual evoked potentials through night vision goggles.

    PubMed

    Rabin, J

    1994-04-01

    Night vision goggles (NVGs) have widespread use in military and civilian environments. NVGs amplify ambient illumination, making performance possible when there is insufficient illumination for normal vision. While visual performance through NVGs is commonly assessed by measuring threshold functions such as visual acuity, few attempts have been made to assess vision through NVGs at suprathreshold levels of stimulation. Such information would be useful to better understand vision through NVGs across a range of stimulus conditions. In this study visual evoked potentials (VEPs) were used to evaluate vision through NVGs across a range of stimulus contrasts. The amplitude and latency of the VEP varied linearly with log contrast. A comparison of VEPs recorded with and without NVGs was used to estimate contrast attenuation through the device. VEPs offer an objective, electrophysiological tool to assess visual performance through NVGs at both threshold and suprathreshold levels of visual stimulation.
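    The linear relation between VEP amplitude and log contrast suggests one way such a device comparison can be made: fit amplitude against log contrast with and without the goggles and read the horizontal offset between the two lines as the log attenuation factor. The sketch below is a minimal illustration with hypothetical amplitudes, not the data from this study.

```python
import numpy as np

# Hypothetical VEP amplitudes (microvolts) at several stimulus contrasts,
# recorded with and without the goggles.
contrast = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
amp_unaided = np.array([2.1, 3.4, 4.6, 5.9, 7.2])
amp_nvg     = np.array([1.0, 2.2, 3.5, 4.8, 6.0])

# Amplitude is modeled as linear in log contrast: amp = slope * log10(C) + intercept.
slope_u, int_u = np.polyfit(np.log10(contrast), amp_unaided, 1)
slope_n, int_n = np.polyfit(np.log10(contrast), amp_nvg, 1)

# If the device scales effective contrast by a constant factor, the NVG line is the
# unaided line shifted horizontally; estimate that shift from the intercept difference.
log_attenuation = (int_u - int_n) / slope_u
print(f"estimated contrast attenuation: {10 ** log_attenuation:.2f}x")
```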

  2. Omission P3 after voluntary action indexes the formation of action-driven prediction.

    PubMed

    Kimura, Motohiro; Takeda, Yuji

    2018-02-01

    When humans frequently experience a certain sensory effect after a certain action, a bidirectional association between neural representations of the action and the sensory effect is rapidly acquired, which enables action-driven prediction of the sensory effect. The present study aimed to test whether or not omission P3, an event-related brain potential (ERP) elicited by the sudden omission of a sensory effect, is sensitive to the formation of action-driven prediction. For this purpose, we examined how omission P3 is affected by the number of possible visual effects. In four separate blocks (1-, 2-, 4-, and 8-stimulus blocks), participants successively pressed a right button at an interval of about 1s. In all blocks, each button press triggered a bar on a display (a bar with square edges, 85%; a bar with round edges, 5%), but occasionally did not (sudden omission of a visual effect, 10%). Participants were required to press a left button when a bar with round edges appeared. In the 1-stimulus block, the orientation of the bar was fixed throughout the block; in the 2-, 4-, and 8-stimulus blocks, the orientation was randomly varied among two, four, and eight possibilities, respectively. Omission P3 in the 1-stimulus block was greater than those in the 2-, 4-, and 8-stimulus blocks; there were no significant differences among the 2-, 4-, and 8-stimulus blocks. This binary pattern nicely fits the limitation in the acquisition of action-effect association; although an association between an action and one visual effect is easily acquired, associations between an action and two or more visual effects cannot be acquired concurrently. Taken together, the present results suggest that omission P3 is highly sensitive to the formation of action-driven prediction. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Dynamic visual noise affects visual short-term memory for surface color, but not spatial location.

    PubMed

    Dent, Kevin

    2010-01-01

    In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1 memory was assessed using a recognition procedure, in which participants indicated if a particular test stimulus matched the memorized stimulus or not. In Experiment 2 participants attempted to either reproduce the locations or they picked the color from a whole range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory and the methodological prospects for DVN as an experimental tool are discussed.

  4. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.

    PubMed

    Zeki, Semir

    2016-10-01

    Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter shows that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (of area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Feature-selective attention in healthy old age: a selective decline in selective attention?

    PubMed

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

    Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
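    Frequency tagging works because each stimulus flickers at its own rate, so the response it drives can be read out as the spectral amplitude at that rate. A minimal sketch of that readout, using assumed tag frequencies and simulated single-channel data rather than anything from this study:

```python
import numpy as np

fs = 500.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)    # 10 s of data

# Hypothetical tag frequencies for the horizontal and vertical bars
f_horiz, f_vert = 8.0, 12.0

# Simulated single-channel EEG: two steady-state responses buried in noise
eeg = (0.8 * np.sin(2 * np.pi * f_horiz * t)
       + 0.5 * np.sin(2 * np.pi * f_vert * t)
       + np.random.randn(t.size))

# Amplitude spectrum; with a 10 s window the resolution is 0.1 Hz,
# so each tag frequency falls exactly on a bin.
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("horizontal bars", f_horiz), ("vertical bars", f_vert)]:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{label} ({f} Hz): amplitude {amp:.2f} a.u.")
```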

  6. Novelty Enhances Visual Salience Independently of Reward in the Parietal Lobe

    PubMed Central

    Foley, Nicholas C.; Jangraw, David C.; Peck, Christopher

    2014-01-01

    Novelty modulates sensory and reward processes, but it remains unknown how these effects interact, i.e., how the visual effects of novelty are related to its motivational effects. A widespread hypothesis, based on findings that novelty activates reward-related structures, is that all the effects of novelty are explained in terms of reward. According to this idea, a novel stimulus is by default assigned high reward value and hence high salience, but this salience rapidly decreases if the stimulus signals a negative outcome. Here we show that, contrary to this idea, novelty affects visual salience in the monkey lateral intraparietal area (LIP) in ways that are independent of expected reward. Monkeys viewed peripheral visual cues that were novel or familiar (received few or many exposures) and predicted whether the trial would have a positive or a negative outcome--i.e., end in a reward or a lack of reward. We used a saccade-based assay to detect whether the cues automatically attracted or repelled attention from their visual field location. We show that salience--measured in saccades and LIP responses--was enhanced by both novelty and positive reward associations, but these factors were dissociable and habituated on different timescales. The monkeys rapidly recognized that a novel stimulus signaled a negative outcome (and withheld anticipatory licking within the first few presentations), but the salience of that stimulus remained high for multiple subsequent presentations. Therefore, novelty can provide an intrinsic bonus for attention that extends beyond the first presentation and is independent of physical rewards. PMID:24899716

  7. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    PubMed

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by holding out one participant at a time and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
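    The analysis pairs a sparsity-promoting classifier with leave-one-participant-out cross-validation. The sketch below substitutes an L1-penalized logistic regression from scikit-learn for the authors' Bayesian implementation and uses randomly generated data with assumed dimensions, purely to show the cross-validation structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical data: trials x voxels pattern matrix, binary labels
# (audiovisual vs auditory-only), and a participant ID per trial (16 participants).
rng = np.random.default_rng(0)
X = rng.standard_normal((16 * 40, 200))
y = rng.integers(0, 2, size=16 * 40)
participant = np.repeat(np.arange(16), 40)

# L1 penalty as a rough stand-in for a sparsity-promoting prior.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=participant):
    clf.fit(X[train], y[train])                 # train on 15 participants
    scores.append(clf.score(X[test], y[test]))  # test on the held-out participant

print(f"mean leave-one-participant-out accuracy: {np.mean(scores):.2f}")
```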

  8. Attention improves encoding of task-relevant features in the human visual cortex.

    PubMed

    Jehee, Janneke F M; Brady, Devin K; Tong, Frank

    2011-06-01

    When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.

  9. Novelty enhances visual salience independently of reward in the parietal lobe.

    PubMed

    Foley, Nicholas C; Jangraw, David C; Peck, Christopher; Gottlieb, Jacqueline

    2014-06-04

    Novelty modulates sensory and reward processes, but it remains unknown how these effects interact, i.e., how the visual effects of novelty are related to its motivational effects. A widespread hypothesis, based on findings that novelty activates reward-related structures, is that all the effects of novelty are explained in terms of reward. According to this idea, a novel stimulus is by default assigned high reward value and hence high salience, but this salience rapidly decreases if the stimulus signals a negative outcome. Here we show that, contrary to this idea, novelty affects visual salience in the monkey lateral intraparietal area (LIP) in ways that are independent of expected reward. Monkeys viewed peripheral visual cues that were novel or familiar (received few or many exposures) and predicted whether the trial would have a positive or a negative outcome--i.e., end in a reward or a lack of reward. We used a saccade-based assay to detect whether the cues automatically attracted or repelled attention from their visual field location. We show that salience--measured in saccades and LIP responses--was enhanced by both novelty and positive reward associations, but these factors were dissociable and habituated on different timescales. The monkeys rapidly recognized that a novel stimulus signaled a negative outcome (and withheld anticipatory licking within the first few presentations), but the salience of that stimulus remained high for multiple subsequent presentations. Therefore, novelty can provide an intrinsic bonus for attention that extends beyond the first presentation and is independent of physical rewards. Copyright © 2014 the authors 0270-6474/14/347947-11$15.00/0.

  10. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V

    PubMed Central

    Zamba, Gideon K. D.; Artes, Paul H.

    2018-01-01

    Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
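    The censoring analysis is simple to state: every threshold estimate below the criterion (here 20 dB) is set to the criterion value, and progression is then re-estimated, for example as the linear-regression slope of MD on time. The sketch below uses hypothetical visual field series and a crude MD proxy; it is meant only to make the procedure concrete.

```python
import numpy as np

def censor(thresholds_db, criterion_db=20.0):
    """Set every threshold estimate below the criterion to the criterion value."""
    return np.maximum(thresholds_db, criterion_db)

def md_slope(mean_deviation_db, years):
    """Rate of visual field progression: linear-regression slope of MD on time (dB/year)."""
    slope, _ = np.polyfit(years, mean_deviation_db, 1)
    return slope

# Hypothetical series: 9 semi-annual visits, 54 test locations each, slow diffuse loss.
rng = np.random.default_rng(1)
years = np.arange(9) * 0.5
fields = 30 - 0.8 * years[:, None] + rng.normal(0, 2, size=(9, 54))

md_raw      = fields.mean(axis=1) - 30          # crude MD proxy: deviation from a 30 dB norm
md_censored = censor(fields).mean(axis=1) - 30

print(f"slope, raw data:      {md_slope(md_raw, years):+.2f} dB/year")
print(f"slope, censored data: {md_slope(md_censored, years):+.2f} dB/year")
```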

  11. The Neural Basis of Taste-visual Modal Conflict Control in Appetitive and Aversive Gustatory Context.

    PubMed

    Xiao, Xiao; Dupuis-Roy, Nicolas; Jiang, Jun; Du, Xue; Zhang, Mingmin; Zhang, Qinglin

    2018-02-21

    The functional magnetic resonance imaging (fMRI) technique was used to investigate brain activations related to conflict control in a taste-visual cross-modal pairing task. On each trial, participants had to decide whether the taste of a gustatory stimulus matched or did not match the expected taste of the food item depicted in an image. There were four conditions: Negative match (NM; sour gustatory stimulus and image of sour food), negative mismatch (NMM; sour gustatory stimulus and image of sweet food), positive match (PM; sweet gustatory stimulus and image of sweet food), positive mismatch (PMM; sweet gustatory stimulus and image of sour food). Blood oxygenation level-dependent (BOLD) contrasts between the NMM and the NM conditions revealed an increased activity in the middle frontal gyrus (MFG) (BA 6), the lingual gyrus (LG) (BA 18), and the postcentral gyrus. Furthermore, the NMM minus NM BOLD differences observed in the MFG were correlated with the NMM minus NM differences in response time. These activations were specifically associated with conflict control during the aversive gustatory stimulation. BOLD contrasts between the PMM and the PM condition revealed no significant positive activation, which supported the hypothesis that the human brain is especially sensitive to aversive stimuli. Altogether, these results suggest that the MFG is associated with the taste-visual cross-modal conflict control. A possible role of the LG as an information conflict detector at an early perceptual stage is further discussed, along with a possible involvement of the postcentral gyrus in the processing of the taste-visual cross-modal sensory contrast. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Dissociable effects of inter-stimulus interval and presentation duration on rapid face categorization.

    PubMed

    Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno

    2018-04-01

    Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
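    The three presentation conditions follow directly from the presentation rate and duty cycle: the period is 1000/rate ms, the stimulus duration is the period times the duty cycle, and the ISI is whatever remains. A small sketch of that arithmetic:

```python
# Stimulus timing implied by presentation rate and duty cycle:
# period = 1000 / rate (ms); on-time = period * duty; ISI = period - on-time.
conditions = [("10 Hz, 50% duty cycle", 10, 0.5),
              ("10 Hz, 100% duty cycle", 10, 1.0),
              ("20 Hz, 100% duty cycle", 20, 1.0)]

for name, rate_hz, duty in conditions:
    period_ms = 1000.0 / rate_hz
    on_ms = period_ms * duty
    isi_ms = period_ms - on_ms
    print(f"{name}: {on_ms:.0f} ms stimulus, {isi_ms:.0f} ms ISI")
```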

  13. Spatial summation revealed in the earliest visual evoked component C1 and the effect of attention on its linearity.

    PubMed

    Chen, Juan; Yu, Qing; Zhu, Ziyun; Peng, Yujia; Fang, Fang

    2016-01-01

    In natural scenes, multiple objects are usually presented simultaneously. How do specific areas of the brain respond to multiple objects based on their responses to each individual object? Previous functional magnetic resonance imaging (fMRI) studies have shown that the activity induced by a multiobject stimulus in the primary visual cortex (V1) can be predicted by the linear or nonlinear sum of the activities induced by its component objects. However, there has been little evidence from electroencephalogram (EEG) studies so far. Here we explored how V1 responded to multiple objects by comparing the EEG signals evoked by a three-grating stimulus with those evoked by its two components (the central grating and the two flanking gratings). We focused on the earliest visual component C1 (onset latency of ∼50 ms) because it has been shown to reflect the feedforward responses of neurons in V1. We found that when the stimulus was unattended, the amplitude of the C1 evoked by the three-grating stimulus roughly equaled the sum of the amplitudes of the C1s evoked by its two components, regardless of the distances between these gratings. When the stimulus was attended, this linear spatial summation existed only when the three gratings were far apart from each other. When the three gratings were close to each other, the spatial summation became compressed. These results suggest that the earliest visual responses in V1 follow a linear summation rule when attention is not involved and that attention can affect the earliest interactions between multiple objects. Copyright © 2016 the American Physiological Society.
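    The linear-summation claim amounts to comparing the C1 evoked by the compound stimulus with the sum of the C1s evoked by its components. A toy comparison with hypothetical amplitudes (not values from this study):

```python
# Hypothetical C1 amplitudes (microvolts, negative-going component)
c1_center   = -1.2   # central grating alone
c1_flankers = -1.8   # two flanking gratings alone
c1_compound = -2.9   # all three gratings together

predicted = c1_center + c1_flankers          # linear-summation prediction
summation_index = c1_compound / predicted    # 1.0 = linear, < 1.0 = compressed (sub-additive)
print(f"predicted {predicted:.1f} uV, observed {c1_compound:.1f} uV, index {summation_index:.2f}")
```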

  14. Unilateral visual neglect overcome by cues implicit in stimulus arrays.

    PubMed Central

    Kartsounis, L D; Warrington, E K

    1989-01-01

    The case of a man with a right hemisphere lesion and with evidence of left-sided visuospatial neglect is reported. On a variety of verbal and nonverbal tasks his performance was significantly modified by information implicit in stimulus configurations. Neglect deficits were present on tests involving spatially distinct or meaningless stimulus arrays but almost absent when stimuli were continuous or meaningfully integrated. PMID:2592968

  15. High blood pressure and visual sensitivity

    NASA Astrophysics Data System (ADS)

    Eisner, Alvin; Samples, John R.

    2003-09-01

    The study had two main purposes: (1) to determine whether the foveal visual sensitivities of people treated for high blood pressure (vascular hypertension) differ from the sensitivities of people who have not been diagnosed with high blood pressure and (2) to understand how visual adaptation is related to standard measures of systemic cardiovascular function. Two groups of middle-aged subjects--hypertensive and normotensive--were examined with a series of test/background stimulus combinations. All subjects met rigorous inclusion criteria for excellent ocular health. Although the visual sensitivities of the two subject groups overlapped extensively, the age-related rate of sensitivity loss was, for some measures, greater for the hypertensive subjects, possibly because of adaptation differences between the two groups. Overall, the degree of steady-state sensitivity loss resulting from an increase of background illuminance (for 580-nm backgrounds) was slightly less for the hypertensive subjects. Among normotensive subjects, the ability of a bright (3.8-log-td), long-wavelength (640-nm) adapting background to selectively suppress the flicker response of long-wavelength-sensitive (LWS) cones was related inversely to the ratio of mean arterial blood pressure to heart rate. The degree of selective suppression was also related to heart rate alone, and there was evidence that short-term changes of cardiovascular response were important. The results suggest that (1) vascular hypertension, or possibly its treatment, subtly affects visual function even in the absence of eye disease and (2) changes in blood flow affect retinal light-adaptation processes involved in the selective suppression of the flicker response from LWS cones caused by bright, long-wavelength backgrounds.

  16. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.

    PubMed

    Whiteley, Louise; Sahani, Maneesh

    2008-03-06

    Perception is an "inverse problem," in which the state of the world must be inferred from the sensory neural activity that results. However, this inference is both ill-posed (Helmholtz, 1856; Marr, 1982) and corrupted by noise (Green & Swets, 1989), requiring the brain to compute perceptual beliefs under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than simply requiring that they use a modal estimate of the uncertain stimulus. Crucially, they approach optimal behavior even when denied the opportunity to learn adaptive decision strategies based on immediate feedback. Our data thus support the idea that flexible representations of uncertainty are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
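    The point that optimal behavior requires integrating the loss function with internal uncertainty, rather than acting on the modal estimate alone, can be made concrete with a toy calculation. The sketch below assumes a Gaussian posterior over a stimulus variable given a noisy internal measurement and an asymmetric loss over two responses; all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Posterior belief about a stimulus variable s (e.g. horizontal offset of a target),
# given a noisy internal measurement: Gaussian with the observer's internal uncertainty.
measurement, sigma = 0.3, 1.0          # hypothetical values
posterior = norm(loc=measurement, scale=sigma)

# Externally imposed, asymmetric loss: responding "right" when s < 0 costs 4;
# responding "left" when s >= 0 costs 1; correct responses cost 0.
p_neg = posterior.cdf(0.0)             # probability that s < 0
expected_loss = {"respond right": 4 * p_neg,
                 "respond left": 1 * (1 - p_neg)}

modal_choice = "respond right" if measurement >= 0 else "respond left"
optimal_choice = min(expected_loss, key=expected_loss.get)
print(f"modal-estimate choice: {modal_choice}, loss-minimizing choice: {optimal_choice}")
```

    With these numbers the modal estimate favors "right" while the expected-loss calculation favors "left", which is exactly the kind of criterion shift the task described above is designed to detect.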

  18. Toward a unified model of face and object recognition in the human visual system

    PubMed Central

    Wallis, Guy

    2013-01-01

    Our understanding of the mechanisms and neural substrates underlying visual recognition has made considerable progress over the past 30 years. During this period, accumulating evidence has led many scientists to conclude that objects and faces are recognised in fundamentally distinct ways, and in fundamentally distinct cortical areas. In the psychological literature, in particular, this dissociation has led to a palpable disconnect between theories of how we process and represent the two classes of object. This paper follows a trend in part of the recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other-race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the seemingly very different types of representation associated with faces and objects. PMID:23966963

  19. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains--which can have timing as precise as 1 ms--is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
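    The computation described, a response permitted only in the window where excitation exceeds a delayed suppressive copy of the stimulus drive, can be sketched in a few lines. The kinetics below are hypothetical and chosen only to show how the rectified difference is narrower than the drive itself; this is not the authors' fitted model.

```python
import numpy as np

dt = 0.001                                    # 1 ms resolution
t = np.arange(0.0, 0.3, dt)

# Hypothetical stimulus-driven input: a slow Gaussian envelope (sd = 20 ms).
drive = np.exp(-(t - 0.1) ** 2 / (2 * 0.02 ** 2))

delay = int(0.008 / dt)                       # suppression lags excitation by ~8 ms
suppression = 0.9 * np.concatenate([np.zeros(delay), drive[:-delay]])

# Output only where excitation exceeds the delayed suppression (rectified difference).
response = np.maximum(drive - suppression, 0.0)

def fwhm_ms(x):
    above = np.flatnonzero(x > 0.5 * x.max())
    return (above[-1] - above[0]) * dt * 1000

print(f"drive width: {fwhm_ms(drive):.0f} ms, response window: {fwhm_ms(response):.0f} ms")
```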

  20. Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.

    PubMed

    Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor

    2015-04-01

    Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal) by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer evidence of an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking. © 2014 Wiley Periodicals, Inc.

  1. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    PubMed

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  2. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.

  3. Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

    PubMed Central

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI) wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that the ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of a body part that is touched by the visual object on the visual-thermal interaction is discussed. PMID:23144814

  4. Does seeing ice really feel cold? Visual-thermal interaction under an illusory body-ownership.

    PubMed

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI) wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that the ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of a body part that is touched by the visual object on the visual-thermal interaction is discussed.

  5. Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback.

    PubMed

    Knoblauch, Andreas; Palm, Günther

    2002-09-01

    To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.

  6. Spatial updating in area LIP is independent of saccade direction.

    PubMed

    Heiser, Laura M; Colby, Carol L

    2006-05-01

    We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.

  7. Imprinting modulates processing of visual information in the visual wulst of chicks.

    PubMed

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-11-14

    Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis for visual imprinting, we focused on visual information processing. A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  8. Imprinting modulates processing of visual information in the visual wulst of chicks

    PubMed Central

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-01-01

    Background Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis for visual imprinting, we focused on visual information processing. Results A lesion in the visual wulst, which is similar functionally to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. Conclusion These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium. PMID:17101060

  9. Single-Cell Analysis of Experience-Dependent Transcriptomic States in Mouse Visual Cortex

    PubMed Central

    Hrvatin, Sinisa; Hochbaum, Daniel R.; Nagy, M. Aurel; Cicconet, Marcelo; Robertson, Keiramarie; Cheadle, Lucas; Zilionis, Rapolas; Ratner, Alex; Borges-Monroy, Rebeca; Klein, Allon M.; Sabatini, Bernardo L.; Greenberg, Michael E.

    2017-01-01

    Activity-dependent transcriptional responses shape cortical function. However, we lack a comprehensive understanding of the diversity of these responses across the full range of cortical cell types, and how these changes contribute to neuronal plasticity and disease. Here we applied high-throughput single-cell RNA-sequencing to investigate the breadth of transcriptional changes that occur across cell types in mouse visual cortex following exposure to light. We identified significant and divergent transcriptional responses to stimulation in each of the 30 cell types characterized, revealing 611 stimulus-responsive genes. Excitatory pyramidal neurons exhibit inter- and intra-laminar heterogeneity in the induction of stimulus responsive genes. Non-neuronal cells demonstrated clear transcriptional responses that may regulate experience-dependent changes in neurovascular coupling and myelination. Together, these results reveal the dynamic landscape of stimulus-dependent transcriptional changes that occur across cell types in visual cortex, which are likely critical for cortical function and may be sites of de-regulation in developmental brain disorders. PMID:29230054

  10. Does attention speed up processing? Decreases and increases of processing rates in visual prior entry.

    PubMed

    Tünnermann, Jan; Petersen, Anders; Scharlau, Ingrid

    2015-03-02

    Selective visual attention improves performance in many tasks. Among others, it leads to "prior entry": earlier perception of an attended compared to an unattended stimulus. Whether this phenomenon is based purely on an increase in the processing rate of the attended stimulus, or whether a decrease in the processing rate of the unattended stimulus also contributes to the effect, has so far remained unanswered. Here we describe a novel approach to this question based on Bundesen's Theory of Visual Attention, which we use to overcome the limitations of earlier prior-entry assessment with temporal order judgments (TOJs), which only allow relative statements regarding the processing speed of attended and unattended stimuli. Prevalent models of prior entry in TOJs either indirectly predict a pure acceleration or cannot model the difference between acceleration and deceleration. In a paradigm that combines a letter-identification task with TOJs, we show that acceleration of the attended and deceleration of the unattended stimuli indeed conjointly cause prior entry. © 2015 ARVO.
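
    As a concrete illustration of the kind of model the abstract builds on, the sketch below computes a TOJ psychometric function under a generic exponential race; the processing rates v_att and v_unatt are arbitrary assumptions, and this is not the authors' full TVA-based model.

      import numpy as np

      # Generic exponential-race sketch of a TOJ psychometric function, in the
      # spirit of (but not identical to) the TVA-based model referenced above.
      # v_att and v_unatt are assumed processing rates (items/s).

      def p_attended_first(soa_s, v_att=30.0, v_unatt=20.0):
          """P(attended stimulus encoded first); soa_s > 0 means the unattended
          stimulus appears soa_s seconds after the attended one."""
          soa_s = np.asarray(soa_s, dtype=float)
          lead = 1.0 - np.exp(-v_att * soa_s) * v_unatt / (v_att + v_unatt)
          lag = np.exp(v_unatt * soa_s) * v_att / (v_att + v_unatt)
          return np.where(soa_s >= 0, lead, lag)

      soas = np.array([-0.08, -0.04, 0.0, 0.04, 0.08])
      print(np.round(p_attended_first(soas), 2))
      # P = 0.5 is reached at a negative SOA when v_att > v_unatt: the attended
      # stimulus can appear later and still be judged first ("prior entry").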

  11. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.

  12. Distinct learning-induced changes in stimulus selectivity and interactions of GABAergic interneuron classes in visual cortex.

    PubMed

    Khan, Adil G; Poort, Jasper; Chadwick, Angus; Blot, Antonin; Sahani, Maneesh; Mrsic-Flogel, Thomas D; Hofer, Sonja B

    2018-06-01

    How learning enhances neural representations for behaviorally relevant stimuli via activity changes of cortical cell types remains unclear. We simultaneously imaged responses of pyramidal cells (PYR) along with parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) inhibitory interneurons in primary visual cortex while mice learned to discriminate visual patterns. Learning increased selectivity for task-relevant stimuli of PYR, PV and SOM subsets but not VIP cells. Strikingly, PV neurons became as selective as PYR cells, and their functional interactions reorganized, leading to the emergence of stimulus-selective PYR-PV ensembles. Conversely, SOM activity became strongly decorrelated from the network, and PYR-SOM coupling before learning predicted selectivity increases in individual PYR cells. Thus, learning differentially shapes the activity and interactions of multiple cell classes: while SOM inhibition may gate selectivity changes, PV interneurons become recruited into stimulus-specific ensembles and provide more selective inhibition as the network becomes better at discriminating behaviorally relevant stimuli.

  13. Closed head injury and perceptual processing in dual-task situations.

    PubMed

    Hein, G; Schubert, T; von Cramon, D Y

    2005-01-01

    Using a classical psychological refractory period (PRP) paradigm we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI. It is not evident in other groups of neurological patients like Parkinson's disease patients. We conclude that the non-interfering processing of input stages in dual-tasks requires cognitive control. A decline in the control of input processes should be considered as one source of dual-task deficits in CHI patients.

  14. Eye movements and the span of the effective stimulus in visual search.

    PubMed

    Bertera, J H; Rayner, K

    2000-04-01

    The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.

  15. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.

  16. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight.

    PubMed

    Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-11-01

    Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.

  17. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight

    PubMed Central

    Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-01-01

    Affective neuroscience has been strongly influenced by the view that a ‘feeling’ is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients’ response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients’ phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity. PMID:19767414

  18. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
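
    A minimal sketch of the cross-correlation analysis behind the perceptual echo, run here on a synthetic EEG trace rather than real occipital recordings; the sampling rate, response kernel, and noise level are assumptions chosen for illustration only.

      import numpy as np

      # Cross-correlate a random luminance sequence with a simulated EEG trace
      # and look for a long-lasting ~10 Hz reverberation. The "EEG" below is
      # synthetic (luminance convolved with a decaying 10 Hz kernel plus noise),
      # standing in for occipital data.

      rng = np.random.default_rng(0)
      fs = 160.0                                  # sampling rate (Hz), assumed
      t = np.arange(0, 6.25, 1 / fs)              # one 6.25 s trial
      lum = rng.standard_normal(t.size)           # random luminance sequence

      lags = np.arange(int(1.0 * fs)) / fs        # 1 s response kernel
      kernel = np.exp(-lags / 0.4) * np.cos(2 * np.pi * 10 * lags)
      eeg = np.convolve(lum, kernel)[: t.size] + 0.5 * rng.standard_normal(t.size)

      def cross_correlation(stimulus, response, max_lag):
          """Stimulus-response cross-correlation for lags 0..max_lag-1 samples."""
          s = (stimulus - stimulus.mean()) / stimulus.std()
          r = (response - response.mean()) / response.std()
          return np.array([np.mean(s[: s.size - k] * r[k:]) for k in range(max_lag)])

      xcorr = cross_correlation(lum, eeg, int(1.0 * fs))
      late = xcorr[int(0.2 * fs):]                # look beyond the first 200 ms
      print(f"strongest late echo at ~{np.argmax(np.abs(late)) / fs + 0.2:.2f} s lag")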

  19. V1 projection zone signals in human macular degeneration depend on task, not stimulus.

    PubMed

    Masuda, Yoichiro; Dumoulin, Serge O; Nakadomari, Satoshi; Wandell, Brian A

    2008-11-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization).

  20. V1 Projection Zone Signals in Human Macular Degeneration Depend on Task, not Stimulus

    PubMed Central

    Dumoulin, Serge O.; Nakadomari, Satoshi; Wandell, Brian A.

    2008-01-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization). PMID:18250083

  1. Near-field visual acuity of pigeons: effects of head location and stimulus luminance.

    PubMed

    Hodos, W; Leibowitz, R W; Bonbright, J C

    1976-03-01

    Two pigeons were trained to discriminate a grating stimulus from a blank stimulus of equivalent luminance in a three-key chamber. The stimuli and blanks were presented behind a transparent center key. The procedure was a conditional discrimination in which pecks on the left key were reinforced if the blank had been present behind the center key and pecks on the right key were reinforced if the grating had been present behind the center key. The spatial frequency of the stimuli was varied in each session from four to 29.5 lines per millimeter in accordance with a variation of the method of constant stimuli. The number of lines per millimeter that the subjects could discriminate at threshold was determined from psychometric functions. Data were collected at five values of stimulus luminance ranging from -0.07 to 3.29 log cd/m2. The distance from the stimulus to the anterior nodal point of the eye, which was determined from measurements taken from high-speed motion-picture photographs of three additional pigeons and published intraocular measurements, was 62.0 mm. This distance and the grating detection thresholds were used to calculate the visual acuity of the birds at each level of luminance. Acuity improved with increasing luminance to a peak value of 0.52, which corresponds to a visual angle of 1.92 min, at a luminance of 2.33 log cd/m2. Further increase in luminance produced a small decline in acuity.
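
    The acuity computation described above can be reconstructed approximately as follows, assuming decimal acuity = 1/MAR (minimum angle of resolution in arcmin) and that one grating line subtends the MAR at threshold; the authors' exact formula may differ.

      import math

      # Back-of-envelope version of the acuity calculation in the abstract.
      # Values are taken from the abstract; the threshold used here is an
      # illustrative value near the reported peak.

      nodal_distance_mm = 62.0       # stimulus to anterior nodal point of the eye
      threshold_lines_per_mm = 29.0  # illustrative grating threshold

      line_width_mm = 1.0 / threshold_lines_per_mm
      mar_arcmin = math.degrees(math.atan(line_width_mm / nodal_distance_mm)) * 60.0
      decimal_acuity = 1.0 / mar_arcmin

      print(f"MAR = {mar_arcmin:.2f} arcmin, decimal acuity = {decimal_acuity:.2f}")
      # About 1.9 arcmin and 0.52, matching the peak values reported above.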

  2. Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands

    PubMed Central

    Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.

    2013-01-01

    The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
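
    The reaction-time criterion used above (race-model violation) is commonly tested with Miller's inequality; a sketch of that test on synthetic reaction times, which stand in for the measured unisensory and redundant-target data, is given below.

      import numpy as np

      # Race-model (Miller) inequality: redundant-target responses faster than
      # the summed unisensory CDFs indicate multisensory facilitation. The RT
      # arrays here are synthetic placeholders, not experimental data.

      rng = np.random.default_rng(0)
      rt_auditory = rng.normal(260, 40, 200)    # ms, assumed unisensory RTs
      rt_visual = rng.normal(280, 45, 200)
      rt_redundant = rng.normal(230, 35, 200)   # ms, audio-visual RTs

      def ecdf(samples, t):
          """Empirical cumulative distribution of samples, evaluated at times t."""
          return np.searchsorted(np.sort(samples), t, side="right") / samples.size

      t_grid = np.linspace(150, 400, 26)
      miller_bound = np.clip(ecdf(rt_auditory, t_grid) + ecdf(rt_visual, t_grid), 0, 1)
      violation = ecdf(rt_redundant, t_grid) > miller_bound   # race model fails here

      print("race-model violation at times (ms):", np.round(t_grid[violation]))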

  3. Extinction and anti-extinction: the "attentional waiting" hypothesis.

    PubMed

    Watling, Rosamond; Danckert, James; Linnell, Karina J; Cocchini, Gianna

    2013-03-01

    Patients with visual extinction have difficulty detecting a single contralesional stimulus when a second stimulus is simultaneously presented on the ipsilesional side. The rarely reported phenomenon of visual anti-extinction describes the opposite behavior, in which patients show greater difficulty in reporting a stimulus presented in isolation than they do in reporting 2 simultaneously presented stimuli. S. J. Goodrich and R. Ward (1997, Anti-extinction following unilateral parietal damage, Cognitive Neuropsychology, Vol. 14, pp. 595-612) suggested that visual anti-extinction is the result of a task-specific mechanism in which processing of the ipsilesional stimulus facilitates responses to the contralesional stimulus; in contrast, G. W. Humphreys, M. J. Riddoch, G. Nys, and D. Heinke (2002, Transient binding by time: Neuropsychological evidence from anti-extinction, Cognitive Neuropsychology, Vol. 19, pp. 361-380) suggested that temporal binding groups contralesional and ipsilesional stimuli together at brief exposure durations. We investigated extinction and anti-extinction phenomena in 3 brain-damaged patients using an extinction paradigm in which the stimulus exposure duration was systematically manipulated. Two patients showed both extinction and anti-extinction depending on the exposure duration of stimuli. Data confirmed the crucial role of duration in modulating the effect of extinction and anti-extinction. However, contrary to Humphreys and colleagues' (2002) single case, our patients showed extinction for short and anti-extinction for long exposure durations, suggesting that different mechanisms might underlie our patients' pattern of data. We discuss a novel "attentional waiting" hypothesis, which proposes that anti-extinction may be observed in patients showing extinction if the exposure duration of stimuli is increased. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. Cortical networks involved in visual awareness independent of visual attention.

    PubMed

    Webb, Taylor W; Igelström, Kajsa M; Schurger, Aaron; Graziano, Michael S A

    2016-11-29

    It is now well established that visual attention, as measured with standard spatial attention tasks, and visual awareness, as measured by report, can be dissociated. It is possible to attend to a stimulus with no reported awareness of the stimulus. We used a behavioral paradigm in which people were aware of a stimulus in one condition and unaware of it in another condition, but the stimulus drew a similar amount of spatial attention in both conditions. The paradigm allowed us to test for brain regions active in association with awareness independent of level of attention. Participants performed the task in an MRI scanner. We looked for brain regions that were more active in the aware than the unaware trials. The largest cluster of activity was obtained in the temporoparietal junction (TPJ) bilaterally. Local independent component analysis (ICA) revealed that this activity contained three distinct, but overlapping, components: a bilateral, anterior component; a left dorsal component; and a right dorsal component. These components had brain-wide functional connectivity that partially overlapped the ventral attention network and the frontoparietal control network. In contrast, no significant activity in association with awareness was found in the banks of the intraparietal sulcus, a region connected to the dorsal attention network and traditionally associated with attention control. These results show the importance of separating awareness and attention when testing for cortical substrates. They are also consistent with a recent proposal that awareness is associated with ventral attention areas, especially in the TPJ.

  5. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  6. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    PubMed

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    PubMed

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
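
    A generic cross-validated pattern-decoding sketch in the spirit of the fMRI analysis above, written with scikit-learn on random placeholder data; it is not the authors' pipeline, and the injected signal is purely illustrative.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import LinearSVC

      # Classify trial-wise voxel patterns by (predicted) grating orientation.
      # The voxel patterns are random placeholders with a weak injected signal.

      rng = np.random.default_rng(1)
      n_trials, n_voxels = 120, 300
      X = rng.normal(size=(n_trials, n_voxels))   # trial-wise voxel patterns
      y = rng.integers(0, 2, n_trials)            # 0 = leftward, 1 = rightward
      X[y == 1, :20] += 0.5                       # weak orientation-dependent signal

      # Five-fold cross-validated decoding accuracy; chance level is 0.5.
      scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
      print(f"mean decoding accuracy: {scores.mean():.2f}")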

  9. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  10. Contrast invariance of orientation tuning in the lateral geniculate nucleus of the feline visual system.

    PubMed

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2015-09-01

    Responses of most neurons in the primary visual cortex of mammals are markedly selective for stimulus orientation and their orientation tuning does not vary with changes in stimulus contrast. The basis of such contrast invariance of orientation tuning has been shown to be the higher variability in the response for low-contrast stimuli. Neurons in the lateral geniculate nucleus (LGN), which provides the major visual input to the cortex, have also been shown to have higher variability in their response to low-contrast stimuli. Parallel studies have also long established mild degrees of orientation selectivity in LGN and retinal cells. In our study, we show that contrast invariance of orientation tuning is already present in the LGN. In addition, we show that the variability of spike responses of LGN neurons increases at lower stimulus contrasts, especially for non-preferred orientations. We suggest that such contrast- and orientation-sensitive variability not only explains the contrast invariance observed in the LGN but can also underlie the contrast-invariant orientation tuning seen at the level of the primary visual cortex. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Mapping and characterization of positive and negative BOLD responses to visual stimulation in multiple brain regions at 7T.

    PubMed

    Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske

    2018-06-01

    External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.

  12. A neural network model of ventriloquism effect and aftereffect.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may advance understanding of the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
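
    A toy sketch of the core intuition in the model above, in which a narrowly tuned visual layer biases a broadly tuned auditory layer through inter-layer excitation; the tuning widths, gain, and single feedforward pass are assumptions, and the published network has lateral synapses and richer dynamics.

      import numpy as np

      # Minimal illustration of visual capture of auditory localization.
      # Not the authors' published network: no lateral dynamics, one pass.

      positions = np.arange(-20, 21)            # spatial axis (degrees)

      def pop_response(center, sigma):
          """Gaussian population activity centred on a stimulus position."""
          return np.exp(-(positions - center) ** 2 / (2 * sigma ** 2))

      def decoded_position(activity):
          """Centre-of-mass (population vector) read-out."""
          return float(np.sum(positions * activity) / np.sum(activity))

      aud_true, vis_true = 0.0, 8.0             # spatially discrepant sources
      aud = pop_response(aud_true, sigma=6.0)   # auditory: broad, less reliable
      vis = pop_response(vis_true, sigma=1.5)   # visual: narrow, more reliable

      w_vis_to_aud = 0.8                        # assumed inter-layer excitatory gain
      aud_biased = aud + w_vis_to_aud * vis     # visual input boosts auditory layer

      print("auditory alone decoded at", round(decoded_position(aud), 2))
      print("with visual capture decoded at", round(decoded_position(aud_biased), 2))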

  13. Impact of stimulus uncanniness on speeded response

    PubMed Central

    Takahashi, Kohske; Fukuda, Haruaki; Samejima, Kazuyuki; Watanabe, Katsumi; Ueda, Kazuhiro

    2015-01-01

    In the uncanny valley phenomenon, both the causes of the feeling of uncanniness and its impact on behavioral performance remain open questions. The present study investigated the behavioral effects of stimulus uncanniness, particularly with respect to speeded responses. Pictures of fish were used as visual stimuli. Participants engaged in direction discrimination, spatial cueing, and dot-probe tasks. The results showed that pictures rated as strongly uncanny delayed speeded responses in the discrimination of the direction of the fish. In the cueing experiment, where a fish served as a task-irrelevant and unpredictable cue for a peripheral target, we again observed that detection of a target was slowed when the cue was an uncanny fish. Conversely, the dot-probe task suggested that uncanny fish, unlike threatening stimuli, did not capture visual spatial attention. These results suggested that stimulus uncanniness resulted in delayed responses and, importantly, that this modulation was not mediated by feelings of threat. PMID:26052297

  14. A neural basis for the spatial suppression of visual motion perception

    PubMed Central

    Liu, Liu D; Haefner, Ralf M; Pack, Christopher C

    2016-01-01

    In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283

  15. Coactivation of response initiation processes with redundant signals.

    PubMed

    Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N

    2018-05-14

    During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
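
    A toy linear-accumulation version of the additive-initiation idea can make the predicted redundancy gain concrete; the slopes, threshold, and SOAs below are arbitrary illustrative values, not fitted parameters.

      import numpy as np

      # Initiation-related activation ramps up after each stimulus onset, the
      # ramps sum, and a response is triggered at a fixed threshold. This is an
      # illustration of additive coactivation, not the authors' fitted model.

      dt, threshold = 1.0, 100.0                 # ms per step, activation threshold
      slope_visual, slope_auditory = 0.8, 1.0    # activation gained per ms

      def reaction_time(auditory_soa_ms, duration_ms=600):
          """RT when the auditory onset lags the visual onset by auditory_soa_ms."""
          t = np.arange(0, duration_ms, dt)
          activation = slope_visual * np.clip(t, 0, None) \
              + slope_auditory * np.clip(t - auditory_soa_ms, 0, None)
          return t[np.argmax(activation >= threshold)]

      print("visual alone:", threshold / slope_visual, "ms")
      for soa in (0, 50, 125):
          print(f"audio-visual, auditory SOA {soa:3d} ms:", reaction_time(soa), "ms")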

  16. Early correlates of visual awareness following orientation and colour rivalry.

    PubMed

    Veser, Sandra; O'Shea, Robert P; Schröger, Erich; Trujillo-Barreto, Nelson J; Roeber, Urte

    2008-10-01

    Binocular rivalry occurs when dissimilar images are presented to corresponding retinal regions of the two eyes: visibility alternates irregularly between the two images, interspersed by brief transitions when parts of both may be visible. We measured event-related potentials (ERPs) following binocular rivalry by changing the stimulus viewed by one eye to be identical to that in the other eye, eliciting binocular fusion. Because of the rivalry, observers either saw the change, when it happened to the visible stimulus, or did not see the change, when it happened to the invisible stimulus. The earliest ERP differences between visible and invisible changes occurred after about 100 ms (P1) when the rivalry was between stimuli differing in orientation, and after about 200 ms (N1) when the rivalry was between stimuli differing in colour. These differences originated from ventro-lateral temporal and prefrontal areas. We conclude that the rivalling stimulus property influences the timing of modulation of correlates of visual awareness in a property-independent cortical network.

  17. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we propose an approach based on artificial neural networks for the recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of the experimentally obtained multichannel EEG data, we optimize the spatiotemporal representation of the multichannel EEG to achieve close to 97% accuracy in recognizing the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial network can classify the associated brain states of other subjects with high accuracy.
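
    A minimal sketch of classifying brain states from multichannel EEG features with an artificial neural network, using synthetic features; the study's key ingredient, the optimized spatiotemporal EEG representation, is not reproduced here.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # Train a small multilayer perceptron to separate two perceptual
      # interpretations from EEG-like feature vectors. The feature matrix is
      # synthetic with a weak class-dependent signal, purely for illustration.

      rng = np.random.default_rng(2)
      n_trials, n_features = 400, 64            # e.g. channel-by-time-window features
      X = rng.normal(size=(n_trials, n_features))
      y = rng.integers(0, 2, n_trials)          # two perceptual interpretations
      X[y == 1, :8] += 0.7                      # weak class-dependent signal

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
      clf.fit(X_train, y_train)
      print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")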

  18. Fluoxetine Does Not Enhance Visual Perceptual Learning and Triazolam Specifically Impairs Learning Transfer

    PubMed Central

    Lagas, Alice K.; Black, Joanna M.; Byblow, Winston D.; Fleming, Melanie K.; Goodman, Lucy K.; Kydd, Robert R.; Russell, Bruce R.; Stinear, Cathy M.; Thompson, Benjamin

    2016-01-01

    The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity within the rat. This effect is related to decreased gamma-aminobutyric acid (GABA) mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA mediated inhibition following a single dose of triazolam on post-training MDD task performance. Within a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final 5 days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam and 1 week after triazolam. Motor and visual cortex excitability were measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms. PMID:27807412

  19. Fear of falling and postural reactivity in patients with glaucoma.

    PubMed

    Daga, Fábio B; Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; Medeiros, Felipe A

    2017-01-01

    To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. This cross-sectional study included 35 glaucoma patients and 26 controls who underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In the dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during the dynamic visual stimulus were more strongly associated with fear of falling (R2 = 18.8%; P = 0.001) than in the static (R2 = 3.0%; P = 0.005) and dark-field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in the anteroposterior direction during the dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment.

  20. Fear of falling and postural reactivity in patients with glaucoma

    PubMed Central

    Daga, Fábio B.; Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Medeiros, Felipe A.

    2017-01-01

    Purpose To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. Methods This cross-sectional study included 35 glaucoma patients and 26 controls who underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In the dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Results Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during the dynamic visual stimulus were more strongly associated with fear of falling (R2 = 18.8%; P = 0.001) than during the static (R2 = 3.0%; P = 0.005) and dark-field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in the anteroposterior direction during the dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). Conclusion In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment. PMID:29211742

  1. How Does Awareness Modulate Goal-Directed and Stimulus-Driven Shifts of Attention Triggered by Value Learning?

    PubMed Central

    Bourgeois, Alexia; Neveu, Rémi; Vuilleumier, Patrik

    2016-01-01

    In order to behave adaptively, attention can be directed in space either voluntarily (i.e., endogenously) according to strategic goals, or involuntarily (i.e., exogenously) through reflexive capture by salient or novel events. The emotional or motivational value of stimuli can also strongly influence attentional orienting. However, little is known about how reward-related effects compete or interact with endogenous and exogenous attention mechanisms, particularly outside of awareness. Here we developed a visual search paradigm to study subliminal value-based attentional orienting. We systematically manipulated goal-directed or stimulus-driven attentional orienting and examined whether an irrelevant, but previously rewarded stimulus could compete with both types of spatial attention during search. Critically, reward was learned without conscious awareness in a preceding phase where one among several visual symbols was consistently paired with a subliminal monetary reinforcement cue. Our results demonstrated that symbols previously associated with a monetary reward received higher attentional priority in the subsequent visual search task, even though these stimuli and reward were no longer task-relevant, and despite reward being unconsciously acquired. Thus, motivational processes operating independent of conscious awareness may provide powerful influences on mechanisms of attentional selection, which could mitigate both stimulus-driven and goal-directed shifts of attention. PMID:27483371

  2. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    PubMed

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields such as visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This generation is configured through an approach borrowed from game programming, which allows high flexibility: stimuli are straightforwardly composed from a broad library of basic components. Stimulus rendering is implemented solely in C++; intermediary libraries for interfacing could therefore be omitted. This enables the program to perform computationally demanding tasks such as en masse random number generation or real-time image processing by local and global operations.
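
    Since GEARS generates its GPU code itself and its scripting API is not documented here, the example below does not use GEARS; it is a framework-independent NumPy sketch of the same idea of composing a stimulus (a drifting sine grating) from a few reusable parameters. All names are illustrative.

```python
# Minimal, GEARS-independent sketch: composing a drifting-grating stimulus
# frame by frame from reusable parameters, in plain NumPy. Parameter names
# (spatial_freq, temporal_freq, contrast) are illustrative, not the GEARS API.
import numpy as np

def grating_frame(t, size=256, spatial_freq=0.05, temporal_freq=2.0,
                  orientation_deg=45.0, contrast=0.8):
    """Return one frame (values in [0, 1]) of a drifting sine grating at time t (s)."""
    y, x = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(orientation_deg)
    # Project pixel coordinates onto the grating's drift axis.
    phase = 2 * np.pi * spatial_freq * (x * np.cos(theta) + y * np.sin(theta))
    drift = 2 * np.pi * temporal_freq * t
    return 0.5 + 0.5 * contrast * np.sin(phase - drift)

# Compose a short stimulus as a stack of frames at 60 Hz.
frames = np.stack([grating_frame(i / 60.0) for i in range(120)])
print(frames.shape)  # (120, 256, 256)
```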

  3. Preparatory attention in visual cortex.

    PubMed

    Battistoni, Elisa; Stein, Timo; Peelen, Marius V

    2017-05-01

    Top-down attention is the mechanism that allows us to selectively process goal-relevant aspects of a scene while ignoring irrelevant aspects. A large body of research has characterized the effects of attention on neural activity evoked by a visual stimulus. However, attention also includes a preparatory phase before stimulus onset in which the attended dimension is internally represented. Here, we review neurophysiological, functional magnetic resonance imaging, magnetoencephalography, electroencephalography, and transcranial magnetic stimulation (TMS) studies investigating the neural basis of preparatory attention, both when attention is directed to a location in space and when it is directed to nonspatial stimulus attributes (content-based attention) ranging from low-level features to object categories. Results show that both spatial and content-based attention lead to increased baseline activity in neural populations that selectively code for the attended attribute. TMS studies provide evidence that this preparatory activity is causally related to subsequent attentional selection and behavioral performance. Attention thus acts by preactivating selective neurons in the visual cortex before stimulus onset. This appears to be a general mechanism that can operate on multiple levels of representation. We discuss the functional relevance of this mechanism, its limitations, and its relation to working memory, imagery, and expectation. We conclude by outlining open questions and future directions. © 2017 New York Academy of Sciences.

  4. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.

  5. Oculomotor inhibition covaries with conscious detection

    PubMed Central

    Rolfs, Martin

    2016-01-01

    Saccadic eye movements occur frequently even during attempted fixation, but they halt momentarily when a new stimulus appears. Here, we demonstrate that this rapid, involuntary “oculomotor freezing” reflex is yoked to fluctuations in explicit visual perception. Human observers reported the presence or absence of a brief visual stimulus while we recorded microsaccades, small spontaneous eye movements. We found that microsaccades were reflexively inhibited if and only if the observer reported seeing the stimulus, even when none was present. By applying a novel Bayesian classification technique to patterns of microsaccades on individual trials, we were able to decode the reported state of perception more accurately than the state of the stimulus (present vs. absent). Moreover, explicit perceptual sensitivity and the oculomotor reflex were both susceptible to orientation-specific adaptation. The adaptation effects suggest that the freezing reflex is mediated by signals processed in the visual cortex before reaching oculomotor control centers rather than relying on a direct subcortical route, as some previous research has suggested. We conclude that the reflexive inhibition of microsaccades immediately and inadvertently reveals when the observer becomes aware of a change in the environment. By providing an objective measure of conscious perceptual detection that does not require explicit reports, this finding opens doors to clinical applications and further investigations of perceptual awareness. PMID:27385794
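
    The trial-by-trial decoding idea can be illustrated with a much simpler stand-in for the authors' Bayesian technique: a Gaussian Bayes rule applied to a single microsaccade-rate feature. The data and parameters below are simulated, and the real analysis operated on richer microsaccade patterns.

```python
# Generic illustration (not the authors' method): classifying single trials as
# "seen" vs. "not seen" from a microsaccade-rate feature with a simple
# Gaussian Bayes rule. Data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
# Simulated training data: microsaccade rate (Hz) in a post-stimulus window.
seen_rates = rng.normal(0.6, 0.3, 200).clip(0)      # inhibition -> lower rate
unseen_rates = rng.normal(1.2, 0.3, 200).clip(0)

def gaussian_loglik(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def classify(rate, prior_seen=0.5):
    ll_seen = gaussian_loglik(rate, seen_rates.mean(), seen_rates.std())
    ll_unseen = gaussian_loglik(rate, unseen_rates.mean(), unseen_rates.std())
    log_post = ll_seen + np.log(prior_seen) - (ll_unseen + np.log(1 - prior_seen))
    return "seen" if log_post > 0 else "not seen"

print(classify(0.4), classify(1.4))  # -> seen  not seen
```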

  6. More Than the Verbal Stimulus Matters: Visual Attention in Language Assessment for People With Aphasia Using Multiple-Choice Image Displays

    PubMed Central

    Ivanova, Maria V.; Hallowell, Brooke

    2017-01-01

    Purpose Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of physical image characteristics of multiple-choice image displays on visual attention allocation by PWA. Method Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images. Results PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition. Conclusion When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics. PMID:28520866
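
    For concreteness, the comparison described above can be expressed as a small calculation of proportion of fixation duration (PFD) for the singleton versus the mean across the three majority images; the millisecond values below are made up.

```python
# Illustrative computation (simulated numbers, not study data): proportion of
# fixation duration (PFD) allocated to the singleton image vs. the mean PFD
# across the three majority images in a four-image display.
fixation_ms = {"singleton": 1450, "majority_1": 820, "majority_2": 760, "majority_3": 910}

total = sum(fixation_ms.values())
pfd_singleton = fixation_ms["singleton"] / total
pfd_majority_mean = sum(v for k, v in fixation_ms.items() if k != "singleton") / (3 * total)

print(f"PFD singleton: {pfd_singleton:.2f}, mean PFD per majority image: {pfd_majority_mean:.2f}")
```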

  7. Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding.

    PubMed

    Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M

    2017-11-08

    When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore a 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of each trial from decoding when relying on a single presentation of the stimulus sequence.
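
    ITR figures like those quoted in such BCI reports are conventionally computed with the Wolpaw formula; the snippet below shows that calculation with placeholder values for target count, accuracy, and trial duration, not the parameters of this study.

```python
# Illustrative calculation of information transfer rate (ITR) using the
# standard Wolpaw formula common in BCI reports; the target count, accuracy,
# and trial duration below are placeholders, not values from this study.
import math

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    """Wolpaw ITR in bits/min for n_targets classes at a given accuracy."""
    if accuracy >= 1.0:
        bits_per_trial = math.log2(n_targets)
    else:
        bits_per_trial = (math.log2(n_targets)
                          + accuracy * math.log2(accuracy)
                          + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
    return bits_per_trial * (60.0 / trial_seconds)

print(round(itr_bits_per_min(32, 0.95, 1.05), 1))  # ~255.2 bits/min for these placeholder values
```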

  8. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  9. Stimulus-related activity during conditional associations in monkey perirhinal cortex neurons depends on upcoming reward outcome.

    PubMed

    Ohyama, Kaoru; Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Shidara, Munetaka; Sato, Chikara

    2012-11-28

    Acquiring the significance of events based on reward-related information is critical for animals to survive and to conduct social activities. The importance of the perirhinal cortex for reward-related information processing has been suggested. To examine whether or not neurons in this cortex represent reward information flexibly when a visual stimulus indicates either a rewarded or unrewarded outcome, neuronal activity in the macaque perirhinal cortex was recorded using a conditional-association cued-reward task. The task design allowed us to study how the neuronal responses depended on the animal's prediction of whether it would or would not be rewarded. Two visual stimuli, a color stimulus as Cue1 followed by a pattern stimulus as Cue2, were sequentially presented. Each pattern stimulus was conditionally associated with both rewarded and unrewarded outcomes depending on the preceding color stimulus. We found activity that depended on the two reward conditions during Cue2, i.e., during pattern stimulus presentation. This response appeared after the response dependent on the image identity of Cue2. A response delineating a specific cue sequence also appeared between the responses dependent on the identity of Cue2 and on the reward conditions. Thus, when Cue1 sets the context for whether or not Cue2 indicates a reward, this region represents the meaning of Cue2, i.e., the reward conditions, independent of the identity of Cue2. These results suggest that neurons in the perirhinal cortex do more than associate a single stimulus with a reward to achieve flexible representations of reward information.

  10. Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.

    PubMed

    Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu

    2015-09-30

    Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of the monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient flash enhanced neuronal responses to a subsequent visual probe stimulus at the same location and shortened the animal's reaction time to it. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors.

  11. Dynamic scanpaths: eye movement analysis methods

    NASA Astrophysics Data System (ADS)

    Blackmon, Theodore T.; Ho, Yeuk F.; Chernyak, Dimitri A.; Azzariti, Michela; Stark, Lawrence W.

    1999-05-01

    An eye movements sequence, or scanpath, during viewing of a stationary stimulus has been described as a set of fixations onto regions-of-interest, ROIs, and the saccades or transitions between them. Such scanpaths have high similarity for the same subject and stimulus both in the spatial loci of the ROIs and their sequence; scanpaths also take place during recollection of a previously viewed stimulus, suggesting that they play a similar role in visual memory and recall.
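
    A common way to quantify the similarity of two such scanpaths, though not necessarily the exact metric used in this work, is a normalized string-edit distance between their ROI-label sequences, as in the sketch below (ROI labels are arbitrary letters).

```python
# One common way to quantify scanpath similarity (illustrative, not
# necessarily the authors' exact metric): string-edit distance between the
# ROI-label sequences of two scanpaths, normalized by sequence length.
def edit_distance(a, b):
    """Levenshtein distance between two ROI-label sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def scanpath_similarity(path1, path2):
    d = edit_distance(path1, path2)
    return 1.0 - d / max(len(path1), len(path2))

# ROI labels for two viewings of the same stimulus.
print(scanpath_similarity("ABCBDA", "ABCADA"))  # -> ~0.83
```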

  12. Methods for Dichoptic Stimulus Presentation in Functional Magnetic Resonance Imaging - A Review

    PubMed Central

    Choubey, Bhaskar; Jurcoane, Alina; Muckli, Lars; Sireteanu, Ruxandra

    2009-01-01

    Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer, and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies as well as newer approaches using bi-screen goggles. PMID:19526076

  13. Neural and cognitive face-selective markers: An integrative review.

    PubMed

    Yovel, Galit

    2016-03-01

    Faces elicit robust and selective neural responses in the primate brain. These neural responses have been investigated with functional MRI and EEG in numerous studies, which have reported face-selective activations in the occipital-temporal cortex and an electrophysiological face-selective response that peaks 170 ms after stimulus onset at occipital-temporal sites. Evidence for face-selective processes has also been consistently reported in cognitive studies, which investigated the face inversion effect, the composite face effect, and the left visual field (LVF) superiority. These cognitive effects indicate that the perceptual representation that we generate for faces differs from the representation that is generated for inverted faces or non-face objects. In this review, I will show that the fMRI and ERP face-selective responses are strongly associated with these three well-established behavioral face-selective measures. I will further review studies that examined the relationship between fMRI and EEG face-selective measures, suggesting that they are strongly linked. Taken together, these studies imply that a holistic representation of a face is generated at 170 ms after stimulus onset over the right hemisphere. These findings, which reveal a strong link between the various and complementary cognitive and neural measures of face processing, make it possible to characterize where, when, and how faces are represented during the first 200 ms of face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  15. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219

  16. Release of inattentional blindness by high working memory load: elucidating the relationship between working memory and selective attention.

    PubMed

    de Fockert, Jan W; Bremner, Andrew J

    2011-12-01

    An unexpected stimulus often remains unnoticed if attention is focused elsewhere. This inattentional blindness has been shown to be increased under conditions of high memory load. Here we show that increasing working memory load can also have the opposite effect of reducing inattentional blindness (i.e., improving stimulus detection) if stimulus detection is competing for attention with a concurrent visual task. Participants were required to judge which of two lines was the longer while holding in working memory either one digit (low load) or six digits (high load). An unexpected visual stimulus was presented once alongside the line judgment task. Detection of the unexpected stimulus was significantly improved under conditions of higher working memory load. This improvement in performance prompts the striking conclusion that an effect of cognitive load is to increase attentional spread, thereby enhancing our ability to detect perceptual stimuli to which we would normally be inattentionally blind under less taxing cognitive conditions. We discuss the implications of these findings for our understanding of the relationship between working memory and selective attention. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Temporal expectancy in the context of a theory of visual attention.

    PubMed

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-10-19

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters per second) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations.
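
    The reported relationship can be written as v = a + b * ln(hazard rate); the short fit below illustrates it with invented processing-speed values, not the study's estimates.

```python
# Hedged sketch (simulated values, not the study's data): the reported
# relationship v = a + b * ln(hazard rate) checked with a simple linear fit
# of processing speed against log hazard rate.
import numpy as np

hazard_rates = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # 1/s, one per block (illustrative)
v = np.array([22.0, 25.1, 27.9, 31.2, 33.8, 37.1])          # letters/s (illustrative)

slope, intercept = np.polyfit(np.log(hazard_rates), v, 1)   # highest-order coefficient first
print(f"v ~ {intercept:.1f} + {slope:.1f} * ln(hazard rate)")
```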

  18. Global inhibition and stimulus competition in the owl optic tectum

    PubMed Central

    Mysore, Shreesh P.; Asadollahi, Ali; Knudsen, Eric I.

    2010-01-01

    Stimulus selection for gaze and spatial attention involves competition among stimuli across sensory modalities and across all of space. We demonstrate that such cross-modal, global competition takes place in the intermediate and deep layers of the optic tectum, a structure known to be involved in gaze control and attention. A variety of either visual or auditory stimuli located anywhere outside of a neuron's receptive field (RF) were shown to suppress or completely eliminate responses to a visual stimulus located inside the RF in nitrous oxide sedated owls. The essential mechanism underlying this stimulus competition is global, divisive inhibition. Unlike the effect of the classical inhibitory surround, which decreases with distance from the RF center and shapes neuronal responses to individual stimuli, global inhibition acts across the entirety of space and modulates responses primarily in the context of multiple stimuli. Whereas the source of this global inhibition is as yet unknown, our data indicate that different networks mediate the classical surround and global inhibition. We hypothesize that this global, cross-modal inhibition, which acts automatically in a bottom-up fashion even in sedated animals, is critical to the creation of a map of stimulus salience in the optic tectum. PMID:20130182
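
    To make the distinction concrete, the toy functions below contrast divisive inhibition (the mechanism described above, where a competitor anywhere in space scales the response down) with subtractive inhibition; they are a conceptual sketch, not the authors' fitted model.

```python
# Conceptual toy model (not the authors' fitted model): a competitor stimulus
# anywhere outside the receptive field divides down the response to the RF
# stimulus, in contrast to subtractive surround inhibition.
def response_divisive(rf_drive, competitor_strength, k=1.0):
    """Global divisive inhibition: the competitor scales the response down."""
    return rf_drive / (1.0 + k * competitor_strength)

def response_subtractive(rf_drive, competitor_strength, k=1.0):
    """Subtractive inhibition, shown for comparison."""
    return max(0.0, rf_drive - k * competitor_strength)

for comp in (0.0, 0.5, 2.0):
    print(comp, round(response_divisive(10.0, comp), 2),
          round(response_subtractive(10.0, comp), 2))
```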

  19. The visual perception of distance ratios outdoors.

    PubMed

    Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Shain, Lindsey M; Hoyng, Stevie C; Kinnard, Jonathan D

    2017-05-01

    We conducted an experiment to evaluate the ability of 32 younger and older adults to visually perceive distances in an outdoor setting. On any given trial, the observers viewed 2 environmental distances and were required to estimate the distance ratio-the length of the (usually) larger distance relative to that of the shorter. The stimulus distance ratios ranged from 1.0 (the stimulus distances were identical) to 8.0 (1 distance interval was 8.0 times longer than the other). The stimulus distances were presented within a 26 m × 60 m portion of a grassy field. The observers were able to reliably estimate the stimulus distance ratios: The overall Pearson r correlation coefficient relating the judged and actual distance ratios was 0.762. Fifty-eight percent of the variance in the observers' perceived distance ratios could thus be accounted for by variations in the actual stimulus ratios. About half of the observers significantly underestimated the distance ratios, while the judgments of the remainder were essentially accurate. Significant modulatory effects of sex and age occurred, such that the male observers' judgments were the most precise, while those of the older males were the most accurate.

  20. Compound Stimulus Extinction Reduces Spontaneous Recovery in Humans

    ERIC Educational Resources Information Center

    Coelho, Cesar A. O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.

    2015-01-01

    Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design ("deepened extinction") shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately…

  1. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

    aerial platform for subsequent visual sensor integration. Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD), ground control station, Global Positioning System (GPS), integer linear program (ILP), inertial navigation system (INS).

  2. Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity

    PubMed Central

    Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.

    2013-01-01

    Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009

  3. Task-dependent V1 responses in human retinitis pigmentosa.

    PubMed

    Masuda, Yoichiro; Horiguchi, Hiroshi; Dumoulin, Serge O; Furuta, Ayumu; Miyauchi, Satoru; Nakadomari, Satoshi; Wandell, Brian A

    2010-10-01

    In functional MRI (fMRI) measurements during passive viewing, subjects with macular degeneration (MD) have a large unresponsive lesion projection zone (LPZ) in V1. fMRI responses can be evoked from the LPZ when subjects engage in a stimulus-related task. The authors report fMRI measurements on a different class of subjects, those with retinitis pigmentosa (RP), who have intact foveal vision but peripheral visual field loss. The authors measured three RP subjects and two control subjects. fMRI was performed while the subjects viewed drifting contrast pattern stimuli. The subjects passively viewed the stimuli or performed a stimulus-related task. During passive viewing, the BOLD response in the posterior calcarine cortex of all RP subjects was in phase with the stimulus. A bordering, anterior LPZ could be identified by responses that were in opposite phase to the stimulus. When the RP subjects made stimulus-related judgments, however, the LPZ responses changed: the responses modulated in phase with the stimulus and task. In control subjects, the responses in a simulated V1 LPZ were unchanged between the passive and the stimulus-related judgment conditions. Task-dependent LPZ responses are present in RP subjects, similar to responses measured in MD subjects. The results are consistent with the hypothesis that deleting the retinal input to the LPZ unmasks preexisting extrastriate feedback signals that are present across V1. The authors discuss the implications of this hypothesis for visual therapy designed to replace the missing V1 LPZ inputs and to restore vision.

  4. Cortical sources of visual evoked potentials during consciousness of executive processes.

    PubMed

    Babiloni, Claudio; Vecchio, Fabrizio; Iacoboni, Marco; Buffo, Paola; Eusebi, Fabrizio; Rossini, Paolo Maria

    2009-03-01

    What is the timing of cortical activation related to consciousness of visuo-spatial executive functions? Electroencephalographic data (128 channels) were recorded in 13 adults. A cue stimulus briefly appeared on the right or left side of the monitor (with equal probability) for a duration that induced about 50% recognition. It was then masked and followed (2 s) by a central visual go stimulus. The left (right) mouse button had to be clicked after a right (left) cue stimulus. This "inverted" response indexed executive processes. Afterward, subjects said "seen" if they had detected the cue stimulus or "not seen" when it was missed. Sources of event-related potentials (ERPs) were estimated by LORETA software. The inverted responses were about 95% in seen trials and about 60% in not seen trials. The cue stimulus evoked frontal-parietooccipital potentials, having the same peak latencies in the seen and not seen data. The maximal difference in amplitude between the seen and not seen ERPs was detected at about +300 ms post-stimulus (P3). P3 sources were higher in amplitude in the seen than not seen trials in dorsolateral prefrontal, premotor and parietooccipital areas. This was true in dorsolateral prefrontal and premotor cortex even when the percentage of inverted responses and reaction time were paired in the seen and not seen trials. These results suggest that, in normal subjects, primary consciousness enhances the efficacy of visuo-spatial executive processes and is sub-served by a late (100- to 400-ms post-stimulus) enhancement of neural synchronization in frontal areas.

  5. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.

  6. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multi-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, patterns and sizes, and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate the brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to the static stimulus. In contrast, the activations in the left calcarine sulcus and left lingual gyrus were significantly higher for the static stimulus as compared to moving stimuli. The visual stimuli of various shapes, patterns and sizes used in this study produced left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in the processing of the static visual stimulus.

  7. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
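
    The model class named above can be sketched generically as a conditional intensity that exponentiates the sum of a stimulus-filter term, a spike-history term, and a bias, simulated in 1-ms bins; the filters and constants below are invented for illustration and are not the fitted LGN model.

```python
# Minimal sketch (a generic form assumed for illustration, not the authors'
# fitted model): a Poisson-like GLM whose conditional intensity combines a
# stimulus filter with a suppressive spike-history filter, simulated per 1-ms bin.
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                                     # 1-ms bins
T = 5000
stimulus = rng.standard_normal(T)              # white-noise stimulus (illustrative)

k = np.exp(-np.arange(30) / 10.0)              # stimulus filter, most recent lag first
h = -2.0 * np.exp(-np.arange(20) / 5.0)        # spike-history filter (refractoriness)
bias = 1.0

spikes = np.zeros(T)
for t in range(T):
    stim_past = stimulus[max(0, t - 29):t + 1][::-1]   # most recent sample first
    spk_past = spikes[max(0, t - 20):t][::-1]
    drive = bias + k[:len(stim_past)] @ stim_past + h[:len(spk_past)] @ spk_past
    rate = np.exp(drive)                        # conditional intensity (spikes/s)
    spikes[t] = float(rng.random() < rate * dt) # Bernoulli approximation per bin

print("simulated spike count:", int(spikes.sum()))
```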

  8. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    PubMed Central

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  9. A theta rhythm in macaque visual cortex and its attentional modulation

    PubMed Central

    Spyropoulos, Georgios; Fries, Pascal

    2018-01-01

    Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632

  10. Primary visual response (M100) delays in adolescents with FASD as measured with MEG.

    PubMed

    Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M

    2013-11-01

    Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost of this disorder to individual families and society. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed for both FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli, and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latencies of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.

  11. Top-Down Control of Visual Alpha Oscillations: Sources of Control Signals and Their Mechanisms of Action

    PubMed Central

    Wang, Chao; Rajagovindan, Rajasimhan; Han, Sahng-Min; Ding, Mingzhou

    2016-01-01

    Alpha oscillations (8–12 Hz) are thought to inversely correlate with cortical excitability. Goal-oriented modulation of alpha has been studied extensively. In visual spatial attention, alpha over the region of visual cortex corresponding to the attended location decreases, signifying increased excitability to facilitate the processing of impending stimuli. In contrast, in retention of verbal working memory, alpha over visual cortex increases, signifying decreased excitability to gate out stimulus input to protect the information held online from sensory interference. According to the prevailing model, this goal-oriented biasing of sensory cortex is effected by top-down control signals from frontal and parietal cortices. The present study tests and substantiates this hypothesis by (a) identifying the signals that mediate the top-down biasing influence, (b) examining whether the cortical areas issuing these signals are task-specific or task-independent, and (c) establishing the possible mechanism of the biasing action. High-density human EEG data were recorded in two experimental paradigms: a trial-by-trial cued visual spatial attention task and a modified Sternberg working memory task. Applying Granger causality to both sensor-level and source-level data we report the following findings. In covert visual spatial attention, the regions exerting top-down control over visual activity are lateralized to the right hemisphere, with the dipoles located at the right frontal eye field (FEF) and the right inferior frontal gyrus (IFG) being the main sources of top-down influences. During retention of verbal working memory, the regions exerting top-down control over visual activity are lateralized to the left hemisphere, with the dipoles located at the left middle frontal gyrus (MFG) being the main source of top-down influences. In both experiments, top-down influences are mediated by alpha oscillations, and the biasing effect is likely achieved via an inhibition-disinhibition mechanism. PMID:26834601
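
    For readers unfamiliar with the directional analysis mentioned above, the sketch below shows a basic bivariate Granger-causality F-test on toy autoregressive data; it is a generic illustration, not the spectral, source-level pipeline used in the study.

```python
# Hedged illustration on toy data (not the study's EEG): a basic bivariate
# Granger-causality F-test compares predicting y from its own past against
# predicting y from the past of both y and x.
import numpy as np

def granger_f(x, y, order=5):
    """F-statistic for 'x Granger-causes y' from least-squares AR fits."""
    n = len(y)
    Y = y[order:]
    X_r = np.column_stack([np.ones(n - order)] +
                          [y[order - k:n - k] for k in range(1, order + 1)])
    X_f = np.column_stack([X_r] + [x[order - k:n - k] for k in range(1, order + 1)])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r, rss_f = rss(X_r), rss(X_f)
    df1, df2 = order, (n - order) - X_f.shape[1]
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Toy system in which x drives y with a two-sample lag.
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + 0.1 * rng.standard_normal()

print(round(granger_f(x, y), 1), round(granger_f(y, x), 1))  # large vs. near 1
```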

  12. Visual-Spatial Orienting in Autism.

    ERIC Educational Resources Information Center

    Wainwright, J. Ann; Bryson, Susan E.

    1996-01-01

    Visual-spatial orienting in 10 high-functioning adults with autism was examined. Compared to controls, subjects responded faster to central than to lateral stimuli, and showed a left visual field advantage for stimulus detection only when laterally presented. Abnormalities in attention shifting and coordination of attentional and motor systems are…

  13. Perceptual Load Alters Visual Excitability

    ERIC Educational Resources Information Center

    Carmel, David; Thorne, Jeremy D.; Rees, Geraint; Lavie, Nilli

    2011-01-01

    Increasing perceptual load reduces the processing of visual stimuli outside the focus of attention, but the mechanism underlying these effects remains unclear. Here we tested an account attributing the effects of perceptual load to modulations of visual cortex excitability. In contrast to stimulus competition accounts, which propose that load…

  14. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.

  15. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  16. The Effects of Visual Stimuli on the Spoken Narrative Performance of School-Age African American Children

    ERIC Educational Resources Information Center

    Mills, Monique T.

    2015-01-01

    Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…

  17. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

    From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
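
    Sensitivity values like those reported above are commonly computed from hit and false-alarm rates; the snippet below shows the simple yes-no form of d' with made-up counts (the authors' discrimination design may use a different decision model).

```python
# Illustration of a d' (d-prime) sensitivity computation in its simple yes-no
# form, z(hit rate) - z(false-alarm rate); the counts below are made up and the
# authors' exact discrimination model may differ.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(hits=78, misses=22, false_alarms=30, correct_rejections=70), 2))  # -> ~1.3
```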

  18. Stimulus Characteristics Affect Humor Processing in Individuals with Asperger Syndrome

    ERIC Educational Resources Information Center

    Samson, Andrea C.; Hegenloh, Michael

    2010-01-01

    The present paper aims to investigate whether individuals with Asperger syndrome (AS) show global humor processing deficits or whether humor comprehension and appreciation depends on stimulus characteristics. Non-verbal visual puns, semantic and Theory of Mind cartoons were rated on comprehension, funniness and the punchlines were explained. AS…

  19. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    PubMed

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  20. Changes in brain activation induced by visual stimulus during and after propofol conscious sedation: a functional MRI study.

    PubMed

    Shinohe, Yutaka; Higuchi, Satomi; Sasaki, Makoto; Sato, Masahito; Noda, Mamoru; Joh, Shigeharu; Satoh, Kenichi

    2016-12-07

    Conscious sedation with propofol sometimes causes amnesia while keeping the patient awake. However, it remains unknown how propofol compromises the memory function. Therefore, we investigated the changes in brain activation induced by visual stimulation during and after conscious sedation with propofol using serial functional MRI. Healthy volunteers received a target-controlled infusion of propofol, and underwent functional MRI scans with a block-design paradigm of visual stimulus before, during, and after conscious sedation. Random-effect model analyses were performed using Statistical Parametric Mapping software. Among the areas showing significant activation in response to the visual stimulus, the visual cortex and fusiform gyrus were significantly suppressed in the sedation session and tended to recover in the early-recovery session of ∼20 min (P<0.001, uncorrected). In contrast, decreased activations of the hippocampus, thalamus, inferior frontal cortex (ventrolateral prefrontal cortex), and cerebellum were maintained during the sedation and early-recovery sessions (P<0.001, uncorrected) and were recovered in the late-recovery session of ∼40 min. Temporal changes in the signals from these areas varied in a manner comparable to that described by the random-effect model analysis (P<0.05, corrected). In conclusion, conscious sedation with propofol may cause prolonged suppression of the activation of memory-related structures, such as the hippocampus, during the early-recovery period, which may lead to transient amnesia.

  1. Cogito ergo video: Task-relevant information is involuntarily boosted into awareness.

    PubMed

    Gayet, Surya; Brascamp, Jan W; Van der Stigchel, Stefan; Paffen, Chris L E

    2015-01-01

    Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.

  2. Context-Dependent Modulation of Functional Connectivity: Secondary Somatosensory Cortex to Prefrontal Cortex Connections in Two-Stimulus-Interval Discrimination Tasks

    PubMed Central

    Chow, Stephanie S.; Romo, Ranulfo; Brody, Carlos D.

    2010-01-01

    In a complex world, a sensory cue may prompt different actions in different contexts. A laboratory example of context-dependent sensory processing is the two-stimulus-interval discrimination task. In each trial, a first stimulus (f1) must be stored in short-term memory and later compared with a second stimulus (f2), for the animal to come to a binary decision. Prefrontal cortex (PFC) neurons need to interpret the f1 information in one way (perhaps with a positive weight) and the f2 information in an opposite way (perhaps with a negative weight), although they come from the very same secondary somatosensory cortex (S2) neurons; therefore, a functional sign inversion is required. This task thus provides a clear example of context-dependent processing. Here we develop a biologically plausible model of a context-dependent signal transformation of the stimulus encoding from S2 to PFC. To ground our model in experimental neurophysiology, we use neurophysiological data recorded by R. Romo’s laboratory from both cortical area S2 and PFC in monkeys performing the task. Our main goal is to use experimentally observed context-dependent modulations of firing rates in cortical area S2 as the basis for a model that achieves a context-dependent inversion of the sign of S2 to PFC connections. This is done without requiring any changes in connectivity (Salinas, 2004b). We (1) characterize the experimentally observed context-dependent firing rate modulation in area S2, (2) construct a model that results in the sign transformation, and (3) characterize the robustness and consequent biological plausibility of the model. PMID:19494146
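
    The required sign inversion can be illustrated with a toy rate model: if context-dependent modulation flips the slope of the S2 response around its baseline, a fixed, unchanged S2-to-PFC weight reads out signals of opposite sign in the two task epochs. The following is only a schematic linear illustration under that assumption, not the authors' network model:

      import numpy as np

      f = np.linspace(10.0, 34.0, 5)        # stimulus frequencies (Hz), illustrative
      r0, k, w = 20.0, 0.5, 1.0             # baseline rate, tuning slope, fixed S2->PFC weight

      r_f1_epoch = r0 + k * (f - f.mean())  # S2 rate while f1 is being encoded
      r_f2_epoch = r0 - k * (f - f.mean())  # same neurons, modulated during the f2 epoch

      # The downstream readout uses the SAME positive weight in both epochs,
      # yet the encoded slope has the opposite sign: an effective sign inversion
      # achieved purely by firing-rate modulation, with no change in connectivity.
      print(w * (r_f1_epoch - r0))          # increases with f
      print(w * (r_f2_epoch - r0))          # decreases with f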

  3. Graded Neuronal Modulations Related to Visual Spatial Attention.

    PubMed

    Mayo, J Patrick; Maunsell, John H R

    2016-05-11

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary "spotlight" of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused ("cued" vs "uncued"). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. Copyright © 2016 the authors 0270-6474/16/365353-09$15.00/0.

  4. Graded Neuronal Modulations Related to Visual Spatial Attention

    PubMed Central

    Maunsell, John H. R.

    2016-01-01

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary “spotlight” of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. SIGNIFICANCE STATEMENT Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused (“cued” vs “uncued”). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. PMID:27170131

  5. Evaluating the operations underlying multisensory integration in the cat superior colliculus.

    PubMed

    Stanford, Terrence R; Quessy, Stephan; Stein, Barry E

    2005-07-13

    It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities and multisensory responses evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.

  6. Visual statistical learning is not reliably modulated by selective attention to isolated events

    PubMed Central

    Musz, Elizabeth; Weber, Matthew J.; Thompson-Schill, Sharon L.

    2014-01-01

    Recent studies of visual statistical learning (VSL) indicate that the visual system can automatically extract temporal and spatial relationships between objects. We report several attempts to replicate and extend earlier work (Turk-Browne et al., 2005) in which observers performed a cover task on one of two interleaved stimulus sets, resulting in learning of temporal relationships that occur in the attended stream, but not those present in the unattended stream. Across four experiments, we exposed observers to a similar or identical familiarization protocol, directing attention to one of two interleaved stimulus sets; afterward, we assessed VSL efficacy for both sets using either implicit response-time measures or explicit familiarity judgments. In line with prior work, we observe learning for the attended stimulus set. However, unlike previous reports, we also observe learning for the unattended stimulus set. When instructed to selectively attend to only one of the stimulus sets and ignore the other set, observers could extract temporal regularities for both sets. Our efforts to experimentally decrease this effect by changing the cover task (Experiment 1) or the complexity of the statistical regularities (Experiment 3) were unsuccessful. A fourth experiment using a different assessment of learning likewise failed to show an attentional effect. Simulations drawing random samples from our first three experiments (n=64) confirm that the distribution of attentional effects in our sample closely approximates the null. We offer several potential explanations for our failure to replicate earlier findings, and discuss how our results suggest limiting conditions on the relevance of attention to VSL. PMID:25172196
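
    One simple way to compare a sample of per-subject attentional effects against a null of no effect is a sign-flip permutation test. The sketch below is only illustrative of that class of resampling analysis, not the authors' procedure, and the data are placeholders:

      import numpy as np

      rng = np.random.default_rng(0)
      effects = rng.normal(0.0, 1.0, size=64)     # placeholder per-subject attention effects

      # Sign-flip permutation test of the mean effect against zero.
      observed = effects.mean()
      flips = rng.choice([-1.0, 1.0], size=(10000, effects.size))
      null_means = (flips * effects).mean(axis=1)
      p_value = np.mean(np.abs(null_means) >= abs(observed))
      print(observed, p_value)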

  7. Stimulus Value Signals in Ventromedial PFC Reflect the Integration of Attribute Value Signals Computed in Fusiform Gyrus and Posterior Superior Temporal Gyrus

    PubMed Central

    Lim, Seung-Lark; O'Doherty, John P.

    2013-01-01

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision. PMID:23678116

  8. Modification of a prey catching response and the development of behavioral persistence in the fire-bellied toad (Bombina orientalis).

    PubMed

    Ramsay, Zachary J; Ikura, Juntaro; Laberge, Frédéric

    2013-11-01

    The present report investigated how fire-bellied toads (Bombina orientalis) modified their response in a prey catching task in which the attribution of food reward was contingent on snapping toward a visual stimulus of moving prey displayed on a computer screen. Two experiments investigated modification of the snapping response, with different intervals between the opportunity to snap at the visual stimulus and reward administration. The snapping response of unpaired controls was decreased compared with the conditioned toads when hour or day intervals were used, but intervals of 5 min produced only minimal change in snapping. The determinants of extinction of the response toward the visual stimulus were then investigated in 3 experiments. The results of the first experiment suggested that increased resistance to extinction depended mostly on the number of training trials, not on partial reinforcement or the magnitude of reinforcement during training. This was confirmed in a second experiment showing that overtraining resulted in resistance to extinction, and that the pairing of the reward with a response toward the stimulus was necessary for that effect, as opposed to pairing reward solely with the experimental context. The last experiment showed that the time elapsed between training trials also influenced extinction, but only in toads that received few training trials. Overall, the results suggest that toads learning about a prey stimulus progress from an early flexible phase, when an action can be modified by its consequences, to an acquired habit characterized by an increasingly inflexible and automatic response.

  9. Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus.

    PubMed

    Lim, Seung-Lark; O'Doherty, John P; Rangel, Antonio

    2013-05-15

    We often have to make choices among multiattribute stimuli (e.g., a food that differs on its taste and health). Behavioral data suggest that choices are made by computing the value of the different attributes and then integrating them into an overall stimulus value signal. However, it is not known whether this theory describes the way the brain computes the stimulus value signals, or how the underlying computations might be implemented. We investigated these questions using a human fMRI task in which individuals had to evaluate T-shirts that varied in their visual esthetic (e.g., color) and semantic (e.g., meaning of logo printed in T-shirt) components. We found that activity in the fusiform gyrus, an area associated with the processing of visual features, correlated with the value of the visual esthetic attributes, but not with the value of the semantic attributes. In contrast, activity in posterior superior temporal gyrus, an area associated with the processing of semantic meaning, exhibited the opposite pattern. Furthermore, both areas exhibited functional connectivity with an area of ventromedial prefrontal cortex that reflects the computation of overall stimulus values at the time of decision. The results provide supporting evidence for the hypothesis that some attribute values are computed in cortical areas specialized in the processing of such features, and that those attribute-specific values are then passed to the vmPFC to be integrated into an overall stimulus value signal to guide the decision.

  10. The Status of Rapid Response Learning in Aging

    PubMed Central

    Dew, Ilana T. Z.; Giovanello, Kelly S.

    2010-01-01

    Strong evidence exists for an age-related impairment in associative processing under intentional encoding and retrieval conditions, but the status of incidental associative processing has been less clear. Two experiments examined the effects of age on rapid response learning – the incidentally learned stimulus-response association that results in a reduction in priming when a learned response becomes inappropriate for a new task. Specifically, we tested whether priming was equivalently sensitive in both age groups to reversing the task-specific decision cue. Experiment 1 showed that cue inversion reduced priming in both age groups using a speeded inside/outside classification task, and in Experiment 2 cue inversion eliminated priming on an associative version of this task. Thus, the ability to encode an association between a stimulus and its initial task-specific response appears to be preserved in aging. These findings provide an important example of a form of associative processing that is unimpaired in older adults. PMID:20853961

  11. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869

  12. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque

    PubMed Central

    Kaneko, Takaaki; Saleem, Kadharbatcha S.; Berman, Rebecca A.; Leopold, David A.

    2016-01-01

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. SIGNIFICANCE STATEMENT Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This “reafferent” motion propagates into the brain as signals that must be interpreted in the context of real object motion. The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. PMID:27629710

  13. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque.

    PubMed

    Russ, Brian E; Kaneko, Takaaki; Saleem, Kadharbatcha S; Berman, Rebecca A; Leopold, David A

    2016-09-14

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This "reafferent" motion propagates into the brain as signals that must be interpreted in the context of real object motion. The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. Copyright © 2016 the authors 0270-6474/16/369580-10$15.00/0.

  14. Human postural responses to motion of real and virtual visual environments under different support base conditions.

    PubMed

    Mergner, T; Schweigart, G; Maurer, C; Blümle, A

    2005-12-01

    The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f=0.05-0.4 Hz; A (max)=0.25 degrees -4 degrees ; v (max)=0.08-10 degrees /s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1 degrees /s and 0.1 degrees , respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values ( approximately 1 degrees /s and 1 degrees , respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1 degrees /s and 0.1 degrees and those in (2) with vestibular detection thresholds of 1 degrees /s and 1 degrees , respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1 degrees /s and 0.1 degrees in condition (1) and that a vestibular mechanism is doing so at 1 degrees /s and 1 degrees in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. 
The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1 degrees /s and 0.1 degrees thresholds, and for (2) monomodal vestibular with the psychophysical 1 degrees /s and 1 degrees thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.

  15. Attention improves encoding of task-relevant features in the human visual cortex

    PubMed Central

    Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank

    2011-01-01

    When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942

  16. Altered modulation of gamma oscillation frequency by speed of visual motion in children with autism spectrum disorders.

    PubMed

    Stroganova, Tatiana A; Butorina, Anna V; Sysoeva, Olga V; Prokofyev, Andrey O; Nikolaeva, Anastasia Yu; Tsetlin, Marina M; Orekhova, Elena V

    2015-01-01

    Recent studies link autism spectrum disorders (ASD) with an altered balance between excitation and inhibition (E/I balance) in cortical networks. Brain oscillations in the high gamma band (50-120 Hz) are sensitive to the E/I balance and may be useful biomarkers of certain ASD subtypes. The frequency of gamma oscillations is mediated by the level of excitation of the fast-spiking inhibitory basket cells recruited by increasing strength of excitatory input. Therefore, experimental manipulations affecting gamma frequency may throw light on inhibitory network dysfunction in ASD. Here, we used magnetoencephalography (MEG) to investigate modulation of visual gamma oscillation frequency by the speed of drifting annular gratings (1.2, 3.6, 6.0 °/s) in 21 boys with ASD and 26 typically developing boys aged 7-15 years. A multitaper method was used to analyze the spectra of gamma power change upon stimulus presentation, and a permutation test was applied for statistical comparisons. We also assessed our participants' visual orientation discrimination thresholds, which are thought to depend on the excitability of inhibitory networks in the visual cortex. Although the frequency of the oscillatory gamma response increased with increasing velocity of visual motion in both groups of participants, the velocity effect was reduced in a substantial proportion of children with ASD. The range of velocity-related gamma frequency modulation correlated inversely with the ability to discriminate oblique line orientation in the ASD group, whereas no such correlation was observed in the group of typically developing participants. Our findings suggest that abnormal velocity-related gamma frequency modulation in ASD may constitute a potential biomarker for reduced excitability of fast-spiking inhibitory neurons in a subset of children with ASD.
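
    The multitaper estimate referred to above averages periodograms obtained with orthogonal DPSS (Slepian) tapers, trading a controlled amount of spectral smoothing for reduced variance. A bare-bones single-channel sketch, given as a generic illustration rather than the authors' MEG pipeline (sampling rate and taper parameters are illustrative):

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, fs, NW=3.0, K=5):
          # K DPSS tapers with time-bandwidth product NW; average the tapered periodograms.
          tapers = dpss(len(x), NW, Kmax=K)
          spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          return freqs, spectra.mean(axis=0) / fs

      fs = 600.0                                   # sampling rate (Hz)
      t = np.arange(0, 1.0, 1.0 / fs)
      rng = np.random.default_rng(0)
      x = np.sin(2 * np.pi * 60 * t) + rng.standard_normal(t.size)   # toy 60 Hz "gamma" signal
      freqs, psd = multitaper_psd(x, fs)
      print(freqs[np.argmax(psd)])                 # peak of the estimate, near 60 Hz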

  17. Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus.

    PubMed

    Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I

    2005-05-01

    Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.

  18. Exploring conflict- and target-related movement of visual attention.

    PubMed

    Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-01-01

    Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.

  19. Relational Learning in Children with Deafness and Cochlear Implants

    ERIC Educational Resources Information Center

    Almeida-Verdu, Ana Claudia; Huziwara, Edson M.; de Souza, Deisy G.; de Rose, Julio C.; Bevilacqua, Maria Cecilia; Lopes, Jair, Jr.; Alves, Cristiane O.; McIlvane, William J.

    2008-01-01

    This four-experiment series sought to evaluate the potential of children with neurosensory deafness and cochlear implants to exhibit auditory-visual and visual-visual stimulus equivalence relations within a matching-to-sample format. Twelve children who became deaf prior to acquiring language (prelingual) and four who became deaf afterwards…

  20. The Extraction of Information From Visual Persistence

    ERIC Educational Resources Information Center

    Erwin, Donald E.

    1976-01-01

    This research sought to distinguish among three concepts of visual persistence by substituting the physical presence of the target stimulus while simultaneously inhibiting the formation of a persisting representation. Reportability of information about the stimuli was compared to a condition in which visual persistence was allowed to fully develop…

  1. The dynamic-stimulus advantage of visual symmetry perception.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko

    2008-09-01

    It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.

  2. The extreme relativity of perception: A new contextual effect modulates human resolving power.

    PubMed

    Namdar, Gal; Ganel, Tzvi; Algom, Daniel

    2016-04-01

    The authors report the discovery of a new effect of context that modulates human resolving power with respect to an individual stimulus. They show that the size of the difference threshold or the just noticeable difference around a standard stimulus depends on the range of the other standards tested simultaneously for resolution within the same experimental session. The larger this range, the poorer the resolving power for a given standard. The authors term this effect the range of standards effect (RSE). They establish this result both in the visual domain for the perception of linear extent, and in the somatosensory domain for the perception of weight. They discuss the contingent nature of stimulus resolution in perception and psychophysics and contrast it with the immunity to contextual influences of visually guided action.

  3. Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli

    PubMed Central

    Saproo, Sameer

    2010-01-01

    Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. This sharpening in the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives. PMID:20410360

  4. Effects of age and eccentricity on visual target detection.

    PubMed

    Gruber, Nicole; Müri, René M; Mosimann, Urs P; Bieri, Rahel; Aeschimann, Andrea; Zito, Giuseppe A; Urwyler, Prabitha; Nyffeler, Thomas; Nef, Tobias

    2013-01-01

    The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20-78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as a compensation for their reduced peripheral detection performance.

  5. Interaction of aberrations, diffraction, and quantal fluctuations determine the impact of pupil size on visual quality.

    PubMed

    Xu, Renfeng; Wang, Huachun; Thibos, Larry N; Bradley, Arthur

    2017-04-01

    Our purpose is to develop a computational approach that jointly assesses the impact of stimulus luminance and pupil size on visual quality. We compared traditional optical measures of image quality and those that incorporate the impact of retinal illuminance dependent neural contrast sensitivity. Visually weighted image quality was calculated for a presbyopic model eye with representative levels of chromatic and monochromatic aberrations as pupil diameter was varied from 7 to 1 mm, stimulus luminance varied from 2000 to 0.1  cd/m2, and defocus varied from 0 to -2 diopters. The model included the effects of quantal fluctuations on neural contrast sensitivity. We tested the model's predictions for five cycles per degree gratings by measuring contrast sensitivity at 5  cyc/deg. Unlike the traditional Strehl ratio and the visually weighted area under the modulation transfer function, the visual Strehl ratio derived from the optical transfer function was able to capture the combined impact of optics and quantal noise on visual quality. In a well-focused eye, provided retinal illuminance is held constant as pupil size varies, visual image quality scales approximately as the square root of illuminance because of quantum fluctuations, but optimum pupil size is essentially independent of retinal illuminance and quantum fluctuations. Conversely, when stimulus luminance is held constant (and therefore illuminance varies with pupil size), optimum pupil size increases as luminance decreases, thereby compensating partially for increased quantum fluctuations. However, in the presence of -1 and -2 diopters of defocus and at high photopic levels where Weber's law operates, optical aberrations and diffraction dominate image quality and pupil optimization. Similar behavior was observed in human observers viewing sinusoidal gratings. Optimum pupil size increases as stimulus luminance drops for the well-focused eye, and the benefits of small pupils for improving defocused image quality remain throughout the photopic and mesopic ranges. However, restricting pupils to <2  mm will cause significant reductions in the best focus vision at low photopic and mesopic luminances.
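
    The visual Strehl ratio derived from the optical transfer function (VSOTF) mentioned above is commonly defined in the visual-optics literature as the neural-contrast-sensitivity-weighted volume under the eye's OTF, normalized by the corresponding volume for a diffraction-limited eye with the same pupil; the exact weighting and normalization used in this study may differ:

      \mathrm{VSOTF} \;=\; \frac{\displaystyle\iint \mathrm{CSF_N}(f_x, f_y)\,\mathrm{OTF}(f_x, f_y)\,df_x\,df_y}{\displaystyle\iint \mathrm{CSF_N}(f_x, f_y)\,\mathrm{OTF_{DL}}(f_x, f_y)\,df_x\,df_y}

    Here CSF_N is the neural contrast sensitivity function, OTF the optical transfer function of the tested eye, and OTF_DL the diffraction-limited transfer function.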

  6. Selection for associative learning of colour stimuli reveals correlated evolution of this learning ability across multiple stimuli and rewards.

    PubMed

    Liefting, Maartje; Hoedjes, Katja M; Lann, Cécile Le; Smid, Hans M; Ellers, Jacintha

    2018-05-16

    We are only starting to understand how variation in cognitive ability can result from local adaptations to environmental conditions. A major question in this regard is to what extent selection on cognitive ability in a specific context affects that ability in general through correlated evolution. To address this question we performed artificial selection on visual associative learning in female Nasonia vitripennis wasps. Using appetitive conditioning in which a visual stimulus was offered in association with a host reward, the ability to learn visual associations was enhanced within 10 generations of selection. To test for correlated evolution affecting this form of learning, the ability to readily form learned associations in females was also tested using an olfactory instead of a visual stimulus in the appetitive conditioning. Additionally, we assessed whether the improved associative learning ability was expressed across sexes by colour-conditioning males with a mating reward. Both females and males from the selected lines consistently demonstrated an increased associative learning ability compared to the control lines, independent of learning context or conditioned stimulus. No difference in relative volume of brain neuropils was detected between the selected and control lines.

  7. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    NASA Astrophysics Data System (ADS)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive fields of visual neurons undergo real-time transforms in response to motion in order to maintain a stable representation. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field, and this dual transform compensates for the geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine-parameter-sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (the retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate's primary visual cortex. We have developed the computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA's terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, we will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.
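
    The compensation scheme described above, in which a geometric transform of the stimulus is cancelled by the dual transform of the receptive field, can be illustrated with the simplest one-parameter Lie group, planar rotation. The following Python fragment is only a toy numerical illustration of that group action, not the authors' neural geometric engine:

      import numpy as np
      from scipy.linalg import expm

      # Generator of planar rotation (an element of the Lie algebra so(2)).
      G = np.array([[0.0, -1.0],
                    [1.0,  0.0]])

      theta = 0.3                        # motion-induced rotation of the stimulus (radians)
      stimulus = np.array([1.0, 0.5])    # a point on the stimulus, illustrative

      moved = expm(theta * G) @ stimulus           # stimulus transformed by the group action
      compensated = expm(-theta * G) @ moved       # dual (inverse) transform of the receptive field

      print(np.allclose(compensated, stimulus))    # True: the representation is left invariant

    The same construction extends to the full affine group by swapping in other generators (translation, dilation, shear) for G.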

  8. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice

    PubMed Central

    Treviño, Mario

    2014-01-01

    Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
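
    The generalized matching law referred to above relates the ratio of choices made to each option to the ratio of rewards obtained from them, with a sensitivity exponent and a bias term. In its standard form from the operant-choice literature (the symbols are generic, not the authors' notation):

      \log\frac{B_L}{B_R} \;=\; s\,\log\frac{R_L}{R_R} \;+\; \log b

    where B_L and B_R are the numbers of left and right choices, R_L and R_R the rewards earned on each side, s the sensitivity to the reward ratio, and b the lateral bias.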

  9. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    PubMed Central

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a range of research fields, including visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS, GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++; therefore, intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks such as en-masse random number generation or real-time image processing with local and global operations. PMID:29326579
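
    For orientation, the kind of stimulus such a framework composes, for example a drifting sinusoidal grating, can be written out in a few lines of NumPy. This generic sketch is not GEARS code and does not use its scripting interface or GPU path:

      import numpy as np

      def grating_frame(size, cycles_per_image, orientation_deg, phase):
          # Luminance values in [0, 1] for one frame of a sinusoidal grating.
          y, x = np.mgrid[0:size, 0:size] / size
          angle = np.deg2rad(orientation_deg)
          ramp = x * np.cos(angle) + y * np.sin(angle)
          return 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_image * ramp + phase)

      # Advancing the phase on each frame makes the grating drift.
      frames = [grating_frame(256, 8, 45.0, 2 * np.pi * t / 60.0) for t in range(60)]
      print(frames[0].shape, frames[0].min(), frames[0].max())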

  10. Visual-somatosensory integration in aging: Does stimulus location really matter?

    PubMed Central

    Mahoney, Jeannette R.; Wang, Cuiling; Dumas, Kristina; Holtzer, Roee

    2014-01-01

    Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that may affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed effect model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects to stimuli both within and across spatial hemifields. PMID:24698637
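
    The race-model test referred to here compares the multisensory RT distribution against the bound implied by probability summation of the unisensory channels; the sketch below (simulated RTs, not the study's data or code) evaluates Miller's inequality F_VS(t) <= F_V(t) + F_S(t) on a grid of probe times, with positive differences indicating violations.

      import numpy as np

      rng = np.random.default_rng(0)
      rt_v  = rng.normal(420, 60, 200)   # hypothetical visual-only RTs (ms)
      rt_s  = rng.normal(400, 60, 200)   # hypothetical somatosensory-only RTs
      rt_vs = rng.normal(340, 50, 200)   # hypothetical bimodal RTs

      t = np.arange(200, 700, 10)        # probe times (ms)
      cdf = lambda rt: np.array([np.mean(rt <= ti) for ti in t])

      violation = cdf(rt_vs) - np.clip(cdf(rt_v) + cdf(rt_s), 0, 1)
      print("max race-model violation:", violation.max())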

  11. Attention changes perceived size of moving visual patterns.

    PubMed

    Anton-Erxleben, Katharina; Henrich, Christian; Treue, Stefan

    2007-08-23

    Spatial attention shifts receptive fields in monkey extrastriate visual cortex toward the focus of attention (S. Ben Hamed, J. R. Duhamel, F. Bremmer, & W. Graf, 2002; C. E. Connor, J. L. Gallant, D. C. Preddie, & D. C. Van Essen, 1996; C. E. Connor, D. C. Preddie, J. L. Gallant, & D. C. Van Essen, 1997; T. Womelsdorf, K. Anton-Erxleben, F. Pieper, & S. Treue, 2006). This distortion in the retinotopic distribution of receptive fields might cause distortions in spatial perception such as an increase of the perceived size of attended stimuli. Here we test for such an effect in human subjects by measuring the point of subjective equality (PSE) for the perceived size of a neutral and an attended stimulus when drawing automatic attention to one of two spatial locations. We found a significant increase in perceived size of attended stimuli. Depending on the absolute stimulus size, this effect ranged from 4% to 12% and was more pronounced for smaller than for larger stimuli. In our experimental design, an attentional effect on task difficulty or a cue bias might influence the PSE measure. We performed control experiments and indeed found such effects, but they could only account for part of the observed results. Our findings demonstrate that the allocation of transient spatial attention onto a visual stimulus increases its perceived size and additionally biases subjects to select this stimulus for a perceptual judgment.
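
    The point of subjective equality is conventionally read off a fitted psychometric function; the sketch below (hypothetical response proportions, not the authors' data or code) fits a cumulative Gaussian to "test appeared larger" judgements and reports its 50% point as the PSE.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      test_size = np.array([0.80, 0.88, 0.96, 1.04, 1.12, 1.20])  # test size relative to standard
      p_larger  = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])  # proportion "test looked larger"

      def psychometric(x, pse, sigma):
          # cumulative Gaussian; its mean is the point of subjective equality
          return norm.cdf(x, loc=pse, scale=sigma)

      (pse, sigma), _ = curve_fit(psychometric, test_size, p_larger, p0=[1.0, 0.1])
      print(f"PSE = {pse:.3f} (relative size judged equal to the standard)")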

  12. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  13. Remembering Complex Objects in Visual Working Memory: Do Capacity Limits Restrict Objects or Features?

    PubMed Central

    Hardman, Kyle; Cowan, Nelson

    2014-01-01

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739
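
    Change-detection accuracy in designs like this one is commonly converted into a capacity estimate with Cowan's K; the minimal sketch below (hypothetical numbers, not the authors' analysis) shows the formula.

      # Cowan's K for single-probe change detection:
      #   K = set_size * (hit_rate - false_alarm_rate)
      def cowan_k(set_size, hit_rate, false_alarm_rate):
          return set_size * (hit_rate - false_alarm_rate)

      print(cowan_k(set_size=4, hit_rate=0.80, false_alarm_rate=0.15))  # ~2.6 objects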

  14. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  15. Design of a 3-dimensional visual illusion speed reduction marking scheme.

    PubMed

    Liang, Guohua; Qian, Guomin; Wang, Ye; Yi, Zige; Ru, Xiaolei; Ye, Wei

    2017-03-01

    To determine which graphic and color combination for a 3-dimensional visual illusion speed reduction marking scheme presents the best visual stimulus, five parameters were designed. According to the Balanced Incomplete Blocks-Law of Comparative Judgment, three schemes, which produce strong stereoscopic impressions, were screened from the 25 initial design schemes of different combinations of graphics and colors. Three-dimensional experimental simulation scenes of the three screened schemes were created to evaluate four different effects according to a semantic analysis. The following conclusions were drawn: schemes with a red color are more effective than those without; the combination of red, yellow and blue produces the best visual stimulus; a larger area from the top surface and the front surface should be colored red; and a triangular prism should be painted as the graphic of the marking according to the stereoscopic impression and the coordination of graphics with the road.

  16. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, to which subjects performed lexical decisions. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar script condition was increased stimulus presentation time differently affected in each visual field. To examine this lateral difference during the processing of unfamiliar scripts as related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, while orthographically familiar kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  17. Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery.

    PubMed

    Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A

    2015-08-01

    We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
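
    Phase resetting of this kind is typically quantified with inter-trial phase coherence (ITPC); the sketch below (synthetic single-trial signals, not the study's recordings or pipeline) band-passes the data, extracts instantaneous phase with the Hilbert transform, and shows ITPC rising after a simulated stimulus-locked reset.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 1000                                # sampling rate (Hz)
      t = np.arange(-0.5, 1.0, 1 / fs)
      rng = np.random.default_rng(1)

      trials = []
      for _ in range(60):
          phase = rng.uniform(0, 2 * np.pi)                 # random pre-stimulus phase
          sig = np.sin(2 * np.pi * 5 * t + phase)           # ongoing 5 Hz oscillation
          sig[t >= 0] = np.sin(2 * np.pi * 5 * t[t >= 0])   # phase reset at stimulus onset
          trials.append(sig + 0.5 * rng.standard_normal(t.size))

      b, a = butter(3, [3, 8], btype="bandpass", fs=fs)
      phases = np.angle(hilbert(filtfilt(b, a, np.array(trials), axis=1), axis=1))
      itpc = np.abs(np.mean(np.exp(1j * phases), axis=0))   # ITPC at each time point

      print("pre-stimulus ITPC:", itpc[t < 0].mean(), "post-stimulus ITPC:", itpc[t > 0.1].mean())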

  18. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    PubMed

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. NFB using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
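
    The training signal described, the theta/alpha power ratio, can be computed from a short EEG epoch with a standard spectral estimate; the sketch below (synthetic signal and assumed band limits, not the study's software) uses Welch's method.

      import numpy as np
      from scipy.signal import welch

      fs = 250
      t = np.arange(0, 4, 1 / fs)
      rng = np.random.default_rng(2)
      eeg = (np.sin(2 * np.pi * 6 * t) +           # theta component (4-8 Hz)
             0.6 * np.sin(2 * np.pi * 10 * t) +    # alpha component (8-13 Hz)
             0.5 * rng.standard_normal(t.size))

      f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
      theta = psd[(f >= 4) & (f < 8)].mean()
      alpha = psd[(f >= 8) & (f < 13)].mean()
      print("theta/alpha ratio:", theta / alpha)   # reinforcement is delivered when this drops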

  19. Behavioral Vision Training for Myopia: Stimulus Specificity of Training Effects.

    ERIC Educational Resources Information Center

    Leung, Jin-Pang

    1988-01-01

    The study assessed transfer of visual training for myopia using two different training stimuli and a single subject A-B-C-A design with a male student volunteer. A procedure including stimulus fading and reinforcement (positive verbal feedback) was used to effectively improve performance on both behavioral acuity tests during the training phases…

  20. Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats

    ERIC Educational Resources Information Center

    Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.

    2007-01-01

    The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…

  1. Contextual Control by Function and Form of Transfer of Functions

    ERIC Educational Resources Information Center

    Perkins, David R.; Dougher, Michael J.; Greenway, David E.

    2007-01-01

    This study investigated conditions leading to contextual control by stimulus topography over transfer of functions. Three 4-member stimulus equivalence classes, each consisting of four (A, B, C, D) topographically distinct visual stimuli, were established for 5 college students. Across classes, designated A stimuli were open-ended linear figures,…

  2. Imitation in Infancy: The Wealth of the Stimulus

    ERIC Educational Resources Information Center

    Ray, Elizabeth; Heyes, Cecilia

    2011-01-01

    Imitation requires the imitator to solve the correspondence problem--to translate visual information from modelled action into matching motor output. It has been widely accepted for some 30 years that the correspondence problem is solved by a specialized, innate cognitive mechanism. This is the conclusion of a poverty of the stimulus argument,…

  3. Teaching Identity Matching of Braille Characters to Beginning Braille Readers

    ERIC Educational Resources Information Center

    Toussaint, Karen A.; Scheithauer, Mindy C.; Tiger, Jeffrey H.; Saunders, Kathryn J.

    2017-01-01

    We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially…

  4. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  5. Flicker Adaptation of Low-Level Cortical Visual Neurons Contributes to Temporal Dilation

    ERIC Educational Resources Information Center

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Nonsensory factors, such as increased arousal and attention, have been thought to mediate this flicker-based temporal-dilation aftereffect. In this study, we provide evidence that adaptation of low-level cortical visual…

  6. "Tunnel Vision": A Possible Keystone Stimulus Control Deficit in Autistic Children.

    ERIC Educational Resources Information Center

    Rincover, Arnold; And Others

    1986-01-01

    Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprised of three visual cues. Decreased distance between cues resulted in responses to more cues, increased distance to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)

  7. Audiovisual semantic congruency during encoding enhances memory performance.

    PubMed

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  8. The role of visual perception measures used in sports vision programmes in predicting actual game performance in Division I collegiate hockey players.

    PubMed

    Poltavski, Dmitri; Biberdorf, David

    2015-01-01

    In the growing field of sports vision little is still known about unique attributes of visual processing in ice hockey and what role visual processing plays in the overall athlete's performance. In the present study we evaluated whether visual, perceptual and cognitive/motor variables collected using the Nike SPARQ Sensory Training Station have significant relevance to the real game statistics of 38 Division I collegiate male and female hockey players. The results demonstrated that 69% of variance in the goals made by forwards in 2011-2013 could be predicted by their faster reaction time to a visual stimulus, better visual memory, better visual discrimination and a faster ability to shift focus between near and far objects. Approximately 33% of variance in game points was significantly related to better discrimination among competing visual stimuli. In addition, reaction time to a visual stimulus as well as stereoptic quickness significantly accounted for 24% of variance in the mean duration of the player's penalty time. This is one of the first studies to show that some of the visual skills that state-of-the-art generalised sports vision programmes are purported to target may indeed be important for hockey players' actual performance on the ice.
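
    Variance-explained figures like the 69% quoted here come from multiple regression; the sketch below (simulated predictors and outcomes, not the study's data) shows the basic computation of R-squared from an ordinary-least-squares fit.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 38                                            # players
      X = rng.standard_normal((n, 4))                   # standardized visual measures (RT, memory, discrimination, near-far focus)
      goals = X @ np.array([-0.6, 0.4, 0.5, 0.3]) + 0.5 * rng.standard_normal(n)

      Xd = np.column_stack([np.ones(n), X])             # add intercept
      beta, *_ = np.linalg.lstsq(Xd, goals, rcond=None)
      pred = Xd @ beta
      r2 = 1 - np.sum((goals - pred) ** 2) / np.sum((goals - goals.mean()) ** 2)
      print("R^2 =", round(r2, 2))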

  9. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  10. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be particularly sensitive to information received from behind than both sides.

  11. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be particularly sensitive to information received from behind than both sides. PMID:23799097

  12. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  13. Temporal expectancy in the context of a theory of visual attention

    PubMed Central

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-01-01

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue–stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0 ms) and the perceptual processing speed (v letters s−1) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations. PMID:24018716
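
    In TVA, single-stimulus report accuracy is commonly modelled as an exponential race that starts at the temporal threshold t0 and accumulates at rate v, p(t) = 1 - exp(-v(t - t0)) for t > t0; the sketch below (hypothetical accuracies, not the authors' data or code) fits that function to recover the two parameters that the study relates to the log hazard rate.

      import numpy as np
      from scipy.optimize import curve_fit

      def tva_accuracy(t, t0, v):
          return np.where(t > t0, 1 - np.exp(-v * (t - t0)), 0.0)

      duration = np.array([0.010, 0.020, 0.040, 0.080, 0.160, 0.320])  # exposure durations (s)
      accuracy = np.array([0.02, 0.18, 0.45, 0.72, 0.92, 0.99])        # hypothetical report accuracy

      (t0, v), _ = curve_fit(tva_accuracy, duration, accuracy, p0=[0.005, 20])
      print(f"t0 = {t0 * 1000:.1f} ms, v = {v:.1f} letters/s")
      # The study's key result: v increases linearly with log(hazard rate) of the
      # cue-stimulus foreperiod, while t0 stays constant.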

  14. Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory.

    PubMed

    Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk

    2015-01-01

    Human memory is content addressable-i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.

  15. Inhibition of voluntary saccadic eye movement commands by abrupt visual onsets.

    PubMed

    Edelman, Jay A; Xu, Kitty Z

    2009-03-01

    Saccadic eye movements are made both to explore the visual world and to react to sudden sensory events. We studied the ability for humans to execute a voluntary (i.e., nonstimulus-driven) saccade command in the face of a suddenly appearing visual stimulus. Subjects were required to make a saccade to a memorized location when a central fixation point disappeared. At varying times relative to fixation point disappearance a visual distractor appeared at a random location. When the distractor appeared at locations distant from the target virtually no saccades were initiated in a 30- to 40-ms interval beginning 70-80 ms after appearance of the distractor. If the distractor was presented slightly earlier relative to saccade initiation then saccades tended to have smaller amplitudes, with velocity profiles suggesting that the distractor terminated them prematurely. In contrast, distractors appearing close to the saccade target elicited express saccade-like movements 70-100 ms after their appearance, although the saccade endpoint was generally scarcely affected by the distractor. An additional experiment showed that these effects were weaker when the saccade was made to a visible target in a delayed task and still weaker when the saccade itself was made in response to the abrupt appearance of a visual stimulus. A final experiment revealed that the effect is smaller, but quite evident, for very small stimuli. These results suggest that the transient component of a visual response can briefly but almost completely suppress a voluntary saccade command, but only when the stimulus evoking that response is distant from the saccade goal.

  16. Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong

    2011-01-01

    Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during…

  17. Effects of muscarinic blockade in perirhinal cortex during visual recognition

    PubMed Central

    Tang, Yi; Mishkin, Mortimer; Aigner, Thomas G.

    1997-01-01

    Stimulus recognition in monkeys is severely impaired by destruction or dysfunction of the perirhinal cortex and also by systemic administration of the cholinergic-muscarinic receptor blocker, scopolamine. These two effects are shown here to be linked: Stimulus recognition was found to be significantly impaired after bilateral microinjection of scopolamine directly into the perirhinal cortex, but not after equivalent injections into the laterally adjacent visual area TE or into the dentate gyrus of the overlying hippocampal formation. The results suggest that the formation of stimulus memories depends critically on cholinergic-muscarinic activation of the perirhinal area, providing a new clue to how stimulus representations are stored. PMID:9356507

  18. Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas

    PubMed Central

    Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon

    2012-01-01

    Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116

  19. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging

    PubMed Central

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-01-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629

  20. Color is processed less efficiently than orientation in change detection but more efficiently in visual search.

    PubMed

    Huang, Liqiang

    2015-05-01

    Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also throws doubts on the generality of processing visual features. © The Author(s) 2015.

  1. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly

    PubMed Central

    Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics. PMID:29261684

  2. Neural mechanisms underlying sensitivity to reverse-phi motion in the fly.

    PubMed

    Leonhardt, Aljoscha; Meier, Matthias; Serbe, Etienne; Eichner, Hubert; Borst, Alexander

    2017-01-01

    Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics.
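
    The sign inversion at the heart of reverse-phi falls naturally out of a correlation-type motion detector; the sketch below (a deliberate simplification, not the authors' circuit model) implements a two-input Hassenstein-Reichardt correlator and shows that reversing the contrast of the displaced frame flips the sign of the output, i.e. the detected direction inverts.

      import numpy as np

      def correlator_response(frame1, frame2):
          # two neighbouring inputs at positions 0 and 1; frame1 is the delayed
          # signal, frame2 the current one (delay-and-correlate with opponent subtraction)
          a1, b1 = frame1[0], frame1[1]
          a2, b2 = frame2[0], frame2[1]
          return a1 * b2 - b1 * a2

      bright_left  = np.array([1.0, 0.0])    # bright bar over the left input
      bright_right = np.array([0.0, 1.0])    # bar displaced to the right
      dark_right   = np.array([0.0, -1.0])   # displaced bar with reversed contrast

      print(correlator_response(bright_left, bright_right))  # +1 -> rightward motion signal
      print(correlator_response(bright_left, dark_right))    # -1 -> signalled direction inverts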

  3. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    PubMed

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.

  4. Rimonabant effects on anxiety induced by simulated public speaking in healthy humans: a preliminary report.

    PubMed

    Bergamaschi, Mateus M; Queiroz, Regina H C; Chagas, Marcos H N; Linares, Ila M P; Arrais, Kátia C; de Oliveira, Danielle C G; Queiroz, Maria E; Nardi, Antonio E; Huestis, Marilyn A; Hallak, Jaime E C; Zuardi, Antonio W; Moreira, Fabrício A; Crippa, José A S

    2014-01-01

    We investigated the hypothesis that rimonabant, a cannabinoid antagonist/inverse agonist, would increase anxiety in healthy subjects during a simulation of the public speaking test. Participants were randomly allocated to receive oral placebo or 90 mg rimonabant in a double-blind design. Subjective effects were measured by Visual Analogue Mood Scale. Physiological parameters, namely arterial blood pressure and heart rate, also were monitored. Twelve participants received oral placebo and 12 received 90 mg rimonabant. Rimonabant increased self-reported anxiety levels during the anticipatory speech and performance phase compared with placebo. Interestingly, rimonabant did not modulate anxiety prestress and was not associated with sedation, cognitive impairment, discomfort, or blood pressure changes. Cannabinoid-1 antagonism magnifies the responses to an anxiogenic stimulus without interfering with the prestress phase. These data suggest that the endocannabinoid system may work on-demand to counteract the consequences of anxiogenic stimuli in healthy humans. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Rimonabant effects on anxiety induced by simulated public speaking in healthy humans: a preliminary report

    PubMed Central

    Bergamaschi, Mateus M.; Queiroz, Regina H. C.; Chagas, Marcos H. N.; Linares, Ila M. P.; Arrais, Kátia C.; de Oliveira, Danielle C. G.; Queiroz, Maria E.; Nardi, Antonio E.; Huestis, Marilyn A.; Hallak, Jaime E. C.; Zuardi, Antonio W.; Moreira, Fabrício A.; Crippa, José A. S.

    2015-01-01

    Objective We investigated the hypothesis that rimonabant, a cannabinoid antagonist/inverse agonist, would increase anxiety in healthy subjects during a simulation of the public speaking test. Methods Participants were randomly allocated to receive oral placebo or 90 mg rimonabant in a double-blind design. Subjective effects were measured by Visual Analogue Mood Scale. Physiological parameters, namely arterial blood pressure and heart rate, also were monitored. Results Twelve participants received oral placebo and 12 received 90 mg rimonabant. Rimonabant increased self-reported anxiety levels during the anticipatory speech and performance phase compared with placebo. Interestingly, rimonabant did not modulate anxiety prestress and was not associated with sedation, cognitive impairment, discomfort, or blood pressure changes. Conclusions Cannabinoid-1 antagonism magnifies the responses to an anxiogenic stimulus without interfering with the prestress phase. These data suggest that the endocannabinoid system may work on-demand to counteract the consequences of anxiogenic stimuli in healthy humans. PMID:24424711

  6. Awareness of Emotional Stimuli Determines the Behavioral Consequences of Amygdala Activation and Amygdala-Prefrontal Connectivity

    PubMed Central

    Lapate, R. C.; Rokers, B.; Tromp, D. P. M.; Orfali, N. S.; Oler, J. A.; Doran, S. T.; Adluru, N.; Alexander, A. L.; Davidson, R. J.

    2016-01-01

    Conscious awareness of negative cues is thought to enhance emotion-regulatory capacity, but the neural mechanisms underlying this effect are unknown. Using continuous flash suppression (CFS) in the MRI scanner, we manipulated visual awareness of fearful faces during an affect misattribution paradigm, in which preferences for neutral objects can be biased by the valence of a previously presented stimulus. The amygdala responded to fearful faces independently of awareness. However, when awareness of fearful faces was prevented, individuals with greater amygdala responses displayed a negative bias toward unrelated novel neutral faces. In contrast, during the aware condition, inverse coupling between the amygdala and prefrontal cortex reduced this bias, particularly among individuals with higher structural connectivity in the major white matter pathway connecting the prefrontal cortex and amygdala. Collectively, these results indicate that awareness promotes the function of a critical emotion-regulatory network targeting the amygdala, providing a mechanistic account for the role of awareness in emotion regulation. PMID:27181344

  7. Sensory Prioritization in Rats: Behavioral Performance and Neuronal Correlates.

    PubMed

    Lee, Conrad C Y; Diamond, Mathew E; Arabzadeh, Ehsan

    2016-03-16

    Operating with some finite quantity of processing resources, an animal would benefit from prioritizing the sensory modality expected to provide key information in a particular context. The present study investigated whether rats dedicate attentional resources to the sensory modality in which a near-threshold event is more likely to occur. We manipulated attention by controlling the likelihood with which a stimulus was presented from one of two modalities. In a whisker session, 80% of trials contained a brief vibration stimulus applied to whiskers and the remaining 20% of trials contained a brief change of luminance. These likelihoods were reversed in a visual session. When a stimulus was presented in the high-likelihood context, detection performance increased and was faster compared with the same stimulus presented in the low-likelihood context. Sensory prioritization was also reflected in neuronal activity in the vibrissal area of primary somatosensory cortex: single units responded differentially to the whisker vibration stimulus when presented with higher probability compared with lower probability. Neuronal activity in the vibrissal cortex displayed signatures of multiplicative gain control and enhanced response to vibration stimuli during the whisker session. In conclusion, rats allocate priority to the more likely stimulus modality and the primary sensory cortex may participate in the redistribution of resources. Detection of low-amplitude events is critical to survival; for example, to warn prey of predators. To formulate a response, decision-making systems must extract minute neuronal signals from the sensory modality that provides key information. Here, we identify the behavioral and neuronal correlates of sensory prioritization in rats. Rats were trained to detect whisker vibrations or visual flickers. Stimuli were embedded in two contexts in which either visual or whisker modality was more likely to occur. When a stimulus was presented in the high-likelihood context, detection was faster and more reliable. Neuronal recording from the vibrissal cortex revealed enhanced representation of vibrations in the prioritized context. These results establish the rat as an alternative model organism to primates for studying attention.

  8. Testing Neuronal Accounts of Anisotropic Motion Perception with Computational Modelling

    PubMed Central

    Wong, William; Chiang Price, Nicholas Seow

    2014-01-01

    There is an over-representation of neurons in early visual cortical areas that respond most strongly to cardinal (horizontal and vertical) orientations and directions of visual stimuli, and cardinal- and oblique-preferring neurons are reported to have different tuning curves. Collectively, these neuronal anisotropies can explain two commonly-reported phenomena of motion perception – the oblique effect and reference repulsion – but it remains unclear whether neuronal anisotropies can simultaneously account for both perceptual effects. We show in psychophysical experiments that reference repulsion and the oblique effect do not depend on the duration of a moving stimulus, and that brief adaptation to a single direction simultaneously causes a reference repulsion in the orientation domain, and the inverse of the oblique effect in the direction domain. We attempted to link these results to underlying neuronal anisotropies by implementing a large family of neuronal decoding models with parametrically varied levels of anisotropy in neuronal direction-tuning preferences, tuning bandwidths and spiking rates. Surprisingly, no model instantiation was able to satisfactorily explain our perceptual data. We argue that the oblique effect arises from the anisotropic distribution of preferred directions evident in V1 and MT, but that reference repulsion occurs separately, perhaps reflecting a process of categorisation occurring in higher-order cortical areas. PMID:25409518
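
    As one concrete, deliberately simplified instance of the kind of decoding model explored here (assumed tuning parameters, not the authors' implementation), the sketch below reads out direction with a population vector from a tuned population in which cardinal preferences are over-represented; the anisotropy alone pulls the estimate toward the nearest cardinal.

      import numpy as np

      kappa = 2.0                          # von Mises tuning concentration (assumed)
      true_dir = np.deg2rad(30)

      prefs = np.deg2rad(np.arange(0, 360, 10)).tolist()                       # uniform backbone
      prefs += [np.deg2rad(d) for d in (0, 90, 180, 270) for _ in range(5)]    # extra cardinal-preferring cells
      prefs = np.array(prefs)

      rates = np.exp(kappa * (np.cos(prefs - true_dir) - 1))                   # noiseless mean responses
      decoded = np.angle(np.sum(rates * np.exp(1j * prefs)))

      print(np.rad2deg(decoded))           # ~28 deg: biased toward the over-represented 0 deg axis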

  9. Stimulation of the substantia nigra influences the specification of memory-guided saccades

    PubMed Central

    Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel

    2013-01-01

    In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. Because these effects occurred with manipulation of nigral activity well before the initiation of saccades and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it is evolving primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551

  10. The role of meaning in contextual cueing: evidence from chess expertise.

    PubMed

    Brockmole, James R; Hambrick, David Z; Windisch, David J; Henderson, John M

    2008-01-01

    In contextual cueing, the position of a search target is learned over repeated exposures to a visual display. The strength of this effect varies across stimulus types. For example, real-world scene contexts give rise to larger search benefits than contexts composed of letters or shapes. We investigated whether such differences in learning can be at least partially explained by the degree of semantic meaning associated with a context independently of the nature of the visual information available (which also varies across stimulus types). Chess boards served as the learning context as their meaningfulness depends on the observer's knowledge of the game. In Experiment 1, boards depicted actual game play, and search benefits for repeated boards were 4 times greater for experts than for novices. In Experiment 2, search benefits among experts were halved when less meaningful randomly generated boards were used. Thus, stimulus meaningfulness independently contributes to learning context-target associations.

  11. Evidence for top-down control of eye movements during visual decision making.

    PubMed

    Glaholt, Mackenzie G; Wu, Mei-Chun; Reingold, Eyal M

    2010-05-01

    Participants' eye movements were monitored while they viewed displays containing 6 exemplars from one of several categories of everyday items (belts, sunglasses, shirts, shoes), with a column of 3 items presented on the left and another column of 3 items presented on the right side of the display. Participants were either required to choose which of the two sets of 3 items was the most expensive (2-AFC) or which of the 6 items was the most expensive (6-AFC). Importantly, the stimulus display, and the relevant stimulus dimension, were held constant across conditions. Consistent with the hypothesis of top-down control of eye movements during visual decision making, we documented greater selectivity in the processing of stimulus information in the 6-AFC than the 2-AFC decision. In addition, strong spatial biases in looking behavior were demonstrated, but these biases were largely insensitive to the instructional manipulation, and did not substantially influence participants' choices.

  12. School-aged children can benefit from audiovisual semantic congruency during memory encoding.

    PubMed

    Heikkilä, Jenni; Tiippana, Kaisa

    2016-05-01

    Although we live in a multisensory world, children's memory has been usually studied concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.

  13. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing, or selective attention, is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502
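
    The stimulus reconstruction from large-scale activity patterns described here is commonly done with an inverted encoding model. The sketch below is illustrative only and is not taken from the paper: the arrays train_activity, train_feature_vals and test_activity are hypothetical (trials x units, plus one feature label per trial), and a generic circular feature basis stands in for the study's actual spatial basis functions.

      import numpy as np

      def basis_set(feature_vals, n_channels=8):
          # Half-rectified, raised-cosine tuning channels over a 0-360 deg feature space.
          feature_vals = np.asarray(feature_vals, dtype=float)
          centers = np.linspace(0, 360, n_channels, endpoint=False)
          d = np.deg2rad(feature_vals[:, None] - centers[None, :])
          return np.maximum(0, np.cos(d)) ** 5               # trials x channels

      def fit_weights(train_activity, train_feature_vals):
          # Least-squares map from channel responses to measured unit activity.
          C = basis_set(train_feature_vals)
          W, *_ = np.linalg.lstsq(C, train_activity, rcond=None)
          return W                                           # channels x units

      def reconstruct(test_activity, W):
          # Invert the weights to recover channel (feature) responses on new trials.
          return test_activity @ np.linalg.pinv(W)           # trials x channels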

  14. Neural entrainment to rhythmic speech in children with developmental dyslexia

    PubMed Central

    Power, Alan J.; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2013-01-01

    A rhythmic paradigm based on repetition of the syllable “ba” was used to study auditory, visual, and audio-visual oscillatory entrainment to speech in children with and without dyslexia using EEG. Children pressed a button whenever they identified a delay in the isochronous stimulus delivery (500 ms; 2 Hz delta band rate). Response power, strength of entrainment and preferred phase of entrainment in the delta and theta frequency bands were compared between groups. The quality of stimulus representation was also measured using cross-correlation of the stimulus envelope with the neural response. The data showed a significant group difference in the preferred phase of entrainment in the delta band in response to the auditory and audio-visual stimulus streams. A different preferred phase has significant implications for the quality of speech information that is encoded neurally, as it implies enhanced neuronal processing (phase alignment) at less informative temporal points in the incoming signal. Consistent with this possibility, the cross-correlogram analysis revealed superior stimulus representation by the control children, who showed a trend for larger peak r-values and significantly later lags in peak r-values compared to participants with dyslexia. Significant relationships between both peak r-values and peak lags were found with behavioral measures of reading. The data indicate that the auditory temporal reference frame for speech processing is atypical in developmental dyslexia, with low frequency (delta) oscillations entraining to a different phase of the rhythmic syllabic input. This would affect the quality of encoding of speech, and could underlie the cognitive impairments in phonological representation that are the behavioral hallmark of this developmental disorder across languages. PMID:24376407
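
    The cross-correlogram analysis mentioned above amounts to a lagged correlation between the speech envelope and the EEG. A minimal sketch, assuming hypothetical equal-length arrays envelope and eeg sampled at fs Hz; this is not the authors' exact pipeline:

      import numpy as np

      def envelope_response_xcorr(envelope, eeg, fs, max_lag_s=0.5):
          # Correlate the stimulus envelope with the neural response at positive
          # lags (response following stimulus); report the peak r-value and lag.
          env = (envelope - envelope.mean()) / envelope.std()
          sig = (eeg - eeg.mean()) / eeg.std()
          lags = np.arange(int(max_lag_s * fs) + 1)
          r = np.array([np.corrcoef(env[:len(env) - L], sig[L:])[0, 1] for L in lags])
          peak = int(np.argmax(r))
          return r[peak], lags[peak] / fs                    # peak r-value, peak lag in seconds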

  15. fMRI during natural sleep as a method to study brain function during early childhood.

    PubMed

    Redcay, Elizabeth; Kennedy, Daniel P; Courchesne, Eric

    2007-12-01

    Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4 year olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.

  16. Dynamics of normalization underlying masking in human visual cortex.

    PubMed

    Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M

    2012-02-22

    Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
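
    The divisive gain control (normalization) framework referred to here can be written in its simplest static form as R = c_test^n / (sigma^n + c_test^n + c_mask^n), with the mask contrast entering the normalization pool. The sketch below uses made-up parameter values and omits the temporal dynamics of the gain pool that the paper's model fits:

      def normalized_response(c_test, c_mask, n=2.0, sigma=0.1):
          # Divisive normalization: the mask contrast divisively suppresses the test response.
          return c_test**n / (sigma**n + c_test**n + c_mask**n)

      # When both contrasts are well above sigma, the response depends mainly on the
      # test/mask contrast ratio rather than on the absolute contrast values.
      print(normalized_response(0.4, 0.4), normalized_response(0.8, 0.8))   # similar
      print(normalized_response(0.8, 0.2))                                  # ratio matters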

  17. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    PubMed

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
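
    Decoding of this kind is typically done by training a linear classifier on voxel activation patterns and evaluating it with cross-validation. A generic scikit-learn sketch with random placeholder data; the authors' exact classifier and cross-validation scheme are not specified here:

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 500))        # trials x voxels from one ROI (placeholder)
      y = rng.integers(0, 2, size=120)       # which motion/disparity conjunction was shown

      clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
      accuracy = cross_val_score(clf, X, y, cv=8)   # in practice, leave-one-run-out CV
      print("mean decoding accuracy:", accuracy.mean())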

  18. Rebalancing Spatial Attention: Endogenous Orienting May Partially Overcome the Left Visual Field Bias in Rapid Serial Visual Presentation.

    PubMed

    Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf

    2017-01-01

    In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: the RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention can also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. The effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors over the right than over the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, as indicated by improved identification of right T2 and reflected in an earlier N2pc latency evoked by right T2 and a larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.

  19. You prime what you code: The fAIM model of priming of pop-out

    PubMed Central

    Meeter, Martijn

    2017-01-01

    Our visual brain makes use of recent experience to interact with the visual world and to efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not just restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as ‘color’ and for less obvious dimensions such as ‘spikiness’ of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer's goals, without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search. PMID:29166386
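
    The feature-weighting idea can be illustrated with a toy gain-update rule in which channels driven by the previous target are boosted and channels driven by distractors are damped, so a repeated target becomes more salient. This is only a schematic stand-in, not the fAIM equations; all channel names and values are hypothetical.

      import numpy as np

      def update_gains(gains, target_feats, distractor_feats, lr=0.2):
          # Boost target-driven channels, damp distractor-driven channels.
          return np.clip(gains + lr * (target_feats - distractor_feats), 0.1, None)

      n_channels = 8
      gains = np.ones(n_channels)
      red, green = np.eye(n_channels)[0], np.eye(n_channels)[1]
      for _ in range(5):                        # red target among green distractors, repeated
          gains = update_gains(gains, red, green)
      print(gains @ red, gains @ green)         # target salience grows, distractor salience falls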

  20. Elevating Endogenous GABA Levels with GAT-1 Blockade Modulates Evoked but Not Induced Responses in Human Visual Cortex

    PubMed Central

    Muthukumaraswamy, Suresh D; Myers, Jim F M; Wilson, Sue J; Nutt, David J; Hamandi, Khalid; Lingford-Hughes, Anne; Singh, Krish D

    2013-01-01

    The electroencephalographic/magnetoencephalographic (EEG/MEG) signal is generated primarily by the summation of the postsynaptic currents of cortical principal cells. At a microcircuit level, these glutamatergic principal cells are reciprocally connected to GABAergic interneurons. Here we investigated the relative sensitivity of visual evoked and induced responses to altered levels of endogenous GABAergic inhibition. To do this, we pharmacologically manipulated the GABA system using tiagabine, which blocks the synaptic GABA transporter 1, and so increases endogenous GABA levels. In a single-blinded and placebo-controlled crossover study of 15 healthy participants, we administered either 15 mg of tiagabine or a placebo. We recorded whole-head MEG, while participants viewed a visual grating stimulus, before, 1, 3 and 5 h post tiagabine ingestion. Using beamformer source localization, we reconstructed responses from early visual cortices. Our results showed no change in either stimulus-induced gamma-band amplitude increases or stimulus-induced alpha amplitude decreases. However, the same data showed a 45% reduction in the evoked response component at ∼80 ms. These data demonstrate that, in early visual cortex the evoked response shows a greater sensitivity compared with induced oscillations to pharmacologically increased endogenous GABA levels. We suggest that previous studies correlating GABA concentrations as measured by magnetic resonance spectroscopy to gamma oscillation frequency may reflect underlying variations such as interneuron/inhibitory synapse density rather than functional synaptic GABA concentrations. PMID:23361120

  1. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    PubMed

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not at the relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination between this stimulus and a homogeneous green background was poor when the M-cone type was not modulated, or only slightly modulated, at the high stimulus velocity (7 cm/s). However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems for object motion, which include achromatic as well as chromatic processing.

  2. Behavioral analysis of signals that guide learned changes in the amplitude and dynamics of the vestibulo-ocular reflex.

    PubMed

    Raymond, J L; Lisberger, S G

    1996-12-01

    We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.

  3. Visual Evoked Cortical Potential (VECP) Elicited by Sinusoidal Gratings Controlled by Pseudo-Random Stimulation

    PubMed Central

    Araújo, Carolina S.; Souza, Givago S.; Gomes, Bruno D.; Silveira, Luiz Carlos L.

    2013-01-01

    The contributions of contrast detection mechanisms to the visual cortical evoked potential (VECP) have been investigated by studying the contrast-response and spatial-frequency-response functions. Previously, the use of m-sequences for stimulus control has been almost entirely restricted to multifocal electrophysiological stimulation, which in several respects differs substantially from conventional VECP recording. Single stimulation with spatial contrast temporally controlled by m-sequences has not been extensively tested or compared to multifocal techniques. Our purpose was to evaluate the influence of spatial frequency and contrast of sinusoidal gratings on the VECP elicited by pseudo-random stimulation. Nine normal subjects were stimulated by achromatic sinusoidal gratings driven by a pseudo-random binary m-sequence at seven spatial frequencies (0.4–10 cpd) and three stimulus sizes (4°, 8°, and 16° of visual angle). At 8° subtense, six contrast levels were used (3.12–99%). The first-order kernel (K1) did not provide a consistent measurable signal across the spatial frequencies and contrasts that were tested (the signal was very small or absent), while the first (K2.1) and second (K2.2) slices of the second-order kernel exhibited reliable responses over the stimulus range. The main differences between results obtained with K2.1 and K2.2 were in the contrast gain as measured in the amplitude versus contrast and amplitude versus spatial frequency functions. The results indicated that K2.1 was dominated by the M pathway, although for some stimulus conditions a P-pathway contribution could be found, while the second slice reflected the P-pathway contribution. The present work extends previous findings on the contribution of the visual pathways to the VECP elicited by pseudo-random stimulation to a wider range of spatial frequencies. PMID:23940546
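
    Binary m-sequences of the kind used for stimulus control here are maximum-length sequences generated by a linear-feedback shift register, which SciPy provides directly. The sketch below shows sequence generation and a first-order (K1) cross-correlation estimate on a toy response; the second-order slices (K2.1, K2.2) analyzed in the study require the corresponding second-order products and are omitted:

      import numpy as np
      from scipy.signal import max_len_seq

      bits = 14
      mseq, _ = max_len_seq(bits)               # 2**14 - 1 binary values (0/1)
      stim = 2.0 * mseq - 1.0                   # map to -1/+1 contrast modulation

      def first_order_kernel(response, stim, n_lags):
          # K1: cross-correlation of the response with the m-sequence at each lag.
          return np.array([np.dot(response[k:], stim[:len(stim) - k])
                           for k in range(n_lags)]) / len(stim)

      response = np.convolve(stim, np.hanning(30), mode="same")   # toy "VECP" for illustration
      k1 = first_order_kernel(response, stim, n_lags=60)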

  4. Behavioral analysis of signals that guide learned changes in the amplitude and dynamics of the vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Raymond, J. L.; Lisberger, S. G.

    1996-01-01

    We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.

  5. Neuronal responses to face-like stimuli in the monkey pulvinar.

    PubMed

    Nguyen, Minh Nui; Hori, Etsuro; Matsumoto, Jumpei; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
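
    The multidimensional scaling (MDS) analysis referred to above embeds the stimuli in a low-dimensional space so that stimuli evoking similar population responses fall close together. A generic scikit-learn sketch with random placeholder firing rates, not the recorded data:

      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(1)
      rates = rng.poisson(5.0, size=(25, 165)).astype(float)   # stimuli x neurons, placeholder

      # 2-D embedding of the stimulus-by-stimulus response-pattern distances.
      embedding = MDS(n_components=2, random_state=0).fit_transform(rates)
      print(embedding.shape)                                    # (25, 2)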

  6. Magnetoencephalographic responses to illusory figures: early evoked gamma is affected by processing of stimulus features.

    PubMed

    Herrmann, C S; Mecklinger, A

    2000-12-01

    We examined evoked and induced responses in event-related fields and gamma activity in the magnetoencephalogram (MEG) during a visual classification task. The objective was to investigate the effects of target classification and the different levels of discrimination between certain stimulus features. We performed two experiments, which differed only in the subjects' task while the stimuli were identical. In Experiment 1, subjects responded by a button-press to rare Kanizsa squares (targets) among Kanizsa triangles and non-Kanizsa figures (standards). This task requires the processing of both stimulus features (colinearity and number of inducer disks). In Experiment 2, the four stimuli of Experiment 1 were used as standards and the occurrence of an additional stimulus without any feature overlap with the Kanizsa stimuli (a rare and highly salient red fixation cross) had to be detected. Discrimination of colinearity and number of inducer disks was not necessarily required for task performance. We applied a wavelet-based time-frequency analysis to the data and calculated topographical maps of the 40 Hz activity. The early evoked gamma activity (100-200 ms) in Experiment 1 was higher for targets as compared to standards. In Experiment 2, no significant differences were found in the gamma responses to the Kanizsa figures and non-Kanizsa figures. This pattern of results suggests that early evoked gamma activity in response to visual stimuli is affected by the targetness of a stimulus and the need to discriminate between the features of a stimulus.
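
    Wavelet-based time-frequency analysis of the kind described here is usually implemented by convolving each trial with a complex Morlet wavelet. A minimal sketch with placeholder data; evoked gamma is computed from the trial average and induced gamma from single-trial power:

      import numpy as np

      def morlet_power(signal, fs, freq=40.0, n_cycles=7):
          # Time course of power at one frequency via complex Morlet convolution.
          sigma_t = n_cycles / (2 * np.pi * freq)
          t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
          wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
          wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))        # unit energy
          return np.abs(np.convolve(signal, wavelet, mode="same"))**2

      fs = 1000.0
      trials = np.random.randn(50, 1000)                        # trials x samples, placeholder
      evoked_gamma = morlet_power(trials.mean(axis=0), fs)      # phase-locked component
      induced_gamma = np.mean([morlet_power(tr, fs) for tr in trials], axis=0)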

  7. Rescuing Stimuli from Invisibility: Inducing a Momentary Release from Visual Masking with Pre-Target Entrainment

    ERIC Educational Resources Information Center

    Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele; Beck, Diane M.; Lleras, Alejandro

    2010-01-01

    At near-threshold levels of stimulation, identical stimulus parameters can result in very different phenomenal experiences. Can we manipulate which stimuli reach consciousness? Here we show that consciousness of otherwise masked stimuli can be experimentally induced by sensory entrainment. We preceded a backward-masked stimulus with a series of…

  8. Stimulus-Driven Attentional Capture by a Static Discontinuity between Perceptual Groups

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Neely, James H.; Naginsky, Yelena; Thomas, Matthew

    2010-01-01

    After C. L. Folk, R. W. Remington, and J. C. Johnston (1992) proposed their contingent-orienting hypothesis, there has been an ongoing debate over whether purely stimulus-driven attentional capture can occur for visual events that are salient by virtue of a distinctive static property (as opposed to a dynamic property such as abrupt onset). The…

  9. Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD

    ERIC Educational Resources Information Center

    Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne

    2015-01-01

    We recorded visual event-related brain potentials from 32 adult male participants (16 high-functioning participants diagnosed with autism spectrum disorder (ASD) and 16 control participants, ranging in age from 18 to 53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability…

  10. Continuous Flash Suppression: Stimulus Fractionation rather than Integration.

    PubMed

    Moors, Pieter; Hesselmann, Guido; Wagemans, Johan; van Ee, Raymond

    2017-10-01

    Recent studies using continuous flash suppression suggest that invisible stimuli are processed as integrated, semantic entities. We challenge the viability of this account, given recent findings on the neural basis of interocular suppression and replication failures of high-profile CFS studies. We conclude that CFS reveals stimulus fractionation in visual cortex. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. How Stimulus and Task Complexity Affect Monitoring in High-Functioning Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Koolen, Sophieke; Vissers, Constance Th. W. M.; Egger, Jos I. M.; Verhoeven, Ludo

    2014-01-01

    The present study examined whether individuals with autism spectrum disorder (ASD) are able to update and monitor working memory representations of visual input, and whether performance is influenced by stimulus and task complexity. 15 high-functioning adults with ASD and 15 controls were asked to allocate either elements of abstract figures or…

  12. The Effect of Exposure Duration on Visual Character Identification in Single, Whole, and Partial Report

    ERIC Educational Resources Information Center

    Petersen, Anders; Andersen, Tobias S.

    2012-01-01

    The psychometric function of single-letter identification is typically described as a function of stimulus intensity. However, the effect of stimulus exposure duration on letter identification remains poorly described. This is surprising because the effect of exposure duration has played a central role in modeling performance in whole and partial…

  13. Gender interactions in the recognition of emotions and conduct symptoms in adolescents.

    PubMed

    Halász, József; Aspán, Nikoletta; Bozsik, Csilla; Gádoros, Júlia; Inántsy-Pap, Judit

    2014-01-01

    According to literature data, impairment in the recognition of emotions might be related to an antisocial developmental pathway. In the present study, the gender-specific interaction between emotion recognition and conduct symptoms was studied in non-clinical adolescents. After informed consent, 29 boys and 24 girls (13-16 years, 14 ± 0.1 years) participated in the study. The parent version of the Strengths and Difficulties Questionnaire was used to assess behavioral problems. The recognition of basic emotions was analyzed according to both the gender of the participants and the gender of the stimulus faces via the "Facial Expressions of Emotion: Stimuli and Tests". Girls were significantly better than boys in the recognition of disgust, irrespective of the gender of the stimulus faces, although both genders were significantly better at recognizing disgust in male stimulus faces than in female stimulus faces. Both boys and girls were significantly better at recognizing sadness in female stimulus faces than in male stimulus faces. There was no gender effect (of either participant or stimulus faces) on the recognition of other emotions. Conduct scores in boys were inversely correlated with the recognition of fear in male stimulus faces (R=-0.439, p<0.05) and with overall emotion recognition in male stimulus faces (R=-0.558, p<0.01). In girls, conduct scores showed a tendency toward a positive correlation with disgust recognition in female stimulus faces (R=0.376, p<0.07). A gender-specific interaction between the recognition of emotions and an antisocial developmental pathway is suggested.

  14. Individual Alpha Peak Frequency Predicts 10 Hz Flicker Effects on Selective Attention.

    PubMed

    Gulbinaite, Rasa; van Viegen, Tara; Wieling, Martijn; Cohen, Michael X; VanRullen, Rufin

    2017-10-18

    Rhythmic visual stimulation ("flicker") is primarily used to "tag" processing of low-level visual and high-level cognitive phenomena. However, preliminary evidence suggests that flicker may also entrain endogenous brain oscillations, thereby modulating cognitive processes supported by those brain rhythms. Here we tested the interaction between 10 Hz flicker and endogenous alpha-band (∼10 Hz) oscillations during a selective visuospatial attention task. We recorded EEG from human participants (both genders) while they performed a modified Eriksen flanker task in which distractors and targets flickered within (10 Hz) or outside (7.5 or 15 Hz) the alpha band. By using a combination of EEG source separation, time-frequency, and single-trial linear mixed-effects modeling, we demonstrate that 10 Hz flicker interfered with stimulus processing more on incongruent than congruent trials (high vs low selective attention demands). Crucially, the effect of 10 Hz flicker on task performance was predicted by the distance between 10 Hz and individual alpha peak frequency (estimated during the task). Finally, the flicker effect on task performance was more strongly predicted by EEG flicker responses during stimulus processing than during preparation for the upcoming stimulus, suggesting that 10 Hz flicker interfered more with reactive than proactive selective attention. These findings are consistent with our hypothesis that visual flicker entrained endogenous alpha-band networks, which in turn impaired task performance. Our findings also provide novel evidence for frequency-dependent exogenous modulation of cognition that is determined by the correspondence between the exogenous flicker frequency and the endogenous brain rhythms. SIGNIFICANCE STATEMENT Here we provide novel evidence that the interaction between exogenous rhythmic visual stimulation and endogenous brain rhythms can have frequency-specific behavioral effects. We show that alpha-band (10 Hz) flicker impairs stimulus processing in a selective attention task when the stimulus flicker rate matches individual alpha peak frequency. The effect of sensory flicker on task performance was stronger when selective attention demands were high, and was stronger during stimulus processing and response selection compared with the prestimulus anticipatory period. These findings provide novel evidence that frequency-specific sensory flicker affects online attentional processing, and also demonstrate that the correspondence between exogenous and endogenous rhythms is an overlooked prerequisite when testing for frequency-specific cognitive effects of flicker. Copyright © 2017 the authors 0270-6474/17/3710173-12$15.00/0.
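
    The key predictor in this study, the distance between the 10 Hz flicker rate and the individual alpha peak frequency, can be estimated from a power spectrum. A minimal sketch with a placeholder EEG trace; the authors estimated the peak from task-period, source-separated data:

      import numpy as np
      from scipy.signal import welch

      def individual_alpha_peak(eeg, fs, band=(7.0, 13.0)):
          # Peak of the power spectral density within the alpha band.
          freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return freqs[mask][np.argmax(psd[mask])]

      fs = 500.0
      eeg = np.random.randn(int(60 * fs))                   # placeholder single-channel trace
      iapf = individual_alpha_peak(eeg, fs)
      distance_from_flicker = abs(10.0 - iapf)              # predictor of the flicker effect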

  15. Moon illusion and spiral aftereffect: illusions due to the loom-zoom system?

    PubMed

    Hershenson, M

    1982-12-01

    The moon illusion and the spiral aftereffect are illusions in which apparent size and apparent distance vary inversely. Because this relationship is exactly opposite to that predicted by the static size-distance invariance hypothesis, the illusions have been called "paradoxical." The illusions may be understood as products of a loom-zoom system, a hypothetical visual subsystem that, in its normal operation, acts according to its structural constraint, the constancy axiom, to produce perceptions that satisfy the constraints of stimulation, the kinetic size-distance invariance hypothesis. When stimulated by its characteristic stimulus of symmetrical expansion or contraction, the loom-zoom system produces the perception of a rigid object moving in depth. If this system is stimulated by a rotating spiral, a negative motion aftereffect is produced when rotation ceases. If fixation is then shifted to a fixed-size disc, the aftereffect process alters perceived distance and the loom-zoom system alters perceived size such that the disc appears to expand and approach or to contract and recede, depending on the direction of rotation of the spiral. If the loom-zoom system is stimulated by a moon-terrain configuration, the equidistance tendency produces a foreshortened perceived distance for the moon as an inverse function of elevation and acts in conjunction with the loom-zoom system to produce the increased perceived size of the moon.

  16. Six-month-old infants' perception of the hollow face illusion: evidence for a general convexity bias.

    PubMed

    Corrow, Sherryse L; Mathison, Jordan; Granrud, Carl E; Yonas, Albert

    2014-01-01

    Corrow, Granrud, Mathison, and Yonas (2011, Perception, 40, 1376-1383) found evidence that 6-month-old infants perceive the hollow face illusion. In the present study we asked whether 6-month-old infants perceive illusory depth reversal for a nonface object and whether infants' perception of the hollow face illusion is affected by mask orientation inversion. In experiment 1 infants viewed a concave bowl, and their reaches were recorded under monocular and binocular viewing conditions. Infants reached to the bowl as if it were convex significantly more often in the monocular than in the binocular viewing condition. These results suggest that infants perceive illusory depth reversal with a nonface stimulus and that the infant visual system has a bias to perceive objects as convex. Infants in experiment 2 viewed a concave face-like mask in upright and inverted orientations. Infants reached to the display as if it were convex more in the monocular than in the binocular condition; however, mask orientation had no effect on reaching. Previous findings that adults' perception of the hollow face illusion is affected by mask orientation inversion have been interpreted as evidence of stored-knowledge influences on perception. However, we found no evidence of such influences in infants, suggesting that their perception of this illusion may not be affected by stored knowledge, and that perceived depth reversal is not face-specific in infants.

  17. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  18. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
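
    For a single stimulus location, the stimulus-response correlation approach described here reduces to a reverse-correlation estimate of the temporal filter linking a motion perturbation to eye velocity. A one-dimensional sketch with hypothetical arrays; the study estimates the full space-time filter across stimulus locations:

      import numpy as np

      def temporal_filter(stim_perturbation, eye_velocity, fs, max_lag_s=0.3):
          # Reverse correlation: cross-correlate the motion perturbation with the
          # eye-velocity response at positive lags, normalized by stimulus power.
          s = stim_perturbation - stim_perturbation.mean()
          v = eye_velocity - eye_velocity.mean()
          lags = np.arange(int(max_lag_s * fs) + 1)
          filt = np.array([np.dot(v[L:], s[:len(s) - L]) for L in lags]) / np.dot(s, s)
          return lags / fs, filt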

  19. [One-year longitudinal change in parameters of myopic school children trained by a new accommodative training device--uncorrected visual acuity, refraction, axial length, accommodation, and pupil reaction].

    PubMed

    Watanabe, Kumiko; Hara, Naoto; Kimijima, Masumi; Kotegawa, Yasue; Ohno, Koji; Arimoto, Ako; Mukuno, Kazuo; Hisahara, Satoru; Horie, Hidenori

    2012-10-01

    School children with myopia were trained using a visual stimulation device that generated an isolated blur stimulus on a visual target, with a constant retinal image size and constant brightness. Uncorrected visual acuity, cycloplegic refraction, axial length, dynamic accommodation and pupillary reaction were measured to investigate the effectiveness of the training. The subjects were 45 school children with myopia and no other ophthalmic disease. The mean age of the children was 8.9 +/- 2.0 years (age range: 6-16) and the mean refraction was -1.56 +/- 0.58 D (mean +/- standard deviation). As a visual stimulus, a white ring on a black background with a constant ratio of visual target size to retinal image size, irrespective of the distance, was displayed on a liquid crystal display (LCD), and the LCD was quickly moved from a proximal to a distal position to produce an isolated blur stimulus. Training with this visual stimulus was carried out in the relaxation phase of accommodation. Uncorrected visual acuity, cycloplegic refraction, axial length, dynamic accommodation and pupillary reaction were investigated before training and every 3 months during the training. Of the 45 subjects, 42 (93%) could be trained for 3 consecutive months, 33 (73%) for 6 months, 23 (51%) for 9 months, and 21 (47%) for 12 months. The mean refraction decreased by 0.83 +/- 0.56 D (mean +/- standard deviation) and the mean axial length increased by 0.47 +/- 0.16 mm at 1 year, showing that the training had some effect in improving the visual acuity. In the tests of the dynamic accommodative responses, the latency of the accommodative phase decreased from 0.4 +/- 0.2 sec to 0.3 +/- 0.1 sec at 1 year, the gain of the accommodative phase improved from 69.0 +/- 27.0% to 93.3 +/- 13.4%, the maximum speed of the accommodative phase increased from 5.1 +/- 2.2 D/sec to 6.8 +/- 2.2 D/sec and the gain of the relaxation phase significantly improved from 52.1 +/- 26.0% to 72.7 +/- 13.7% (corresponding t-test, p < 0.005). No significant changes were observed in the pupillary reaction. The training device was useful for improving the accommodative functions and accommodative excess, suggesting that it may be able to suppress the progression of low myopia, the development of which is known to be strongly influenced by environmental factors.

  20. The Brightness of Colour

    PubMed Central

    Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau

    2009-01-01

    Background: The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies.

    Results: Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI.

    Conclusions: The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398
