Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.
Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline
2014-02-01
In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.
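Across the entries collected below, the contextual cueing effect is quantified the same way: the mean RT advantage for repeated over novel displays. A minimal Python sketch of that computation, using invented trial data rather than values from any study cited here:

```python
import statistics

# Hypothetical (condition, RT in ms) records from a search task in which
# "repeated" displays recur across blocks and "novel" displays do not.
trials = [
    ("repeated", 812), ("novel", 878),
    ("repeated", 790), ("novel", 865),
    ("repeated", 775), ("novel", 871),
]

def contextual_cueing_ms(trials):
    """Mean RT(novel) minus mean RT(repeated); positive = cueing benefit."""
    rts = {"repeated": [], "novel": []}
    for condition, rt in trials:
        rts[condition].append(rt)
    return statistics.mean(rts["novel"]) - statistics.mean(rts["repeated"])

print(f"contextual cueing effect: {contextual_cueing_ms(trials):.1f} ms")
```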
Visual search and contextual cueing: differential effects in 10-year-old children and adults.
Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M
2011-02-01
The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.
Lee, Kyung J.; Park, Seong-Beom; Lee, Inah
2014-01-01
Learning theories categorize learning systems into elemental and contextual systems, the former being processed by non-hippocampal regions and the latter being processed in the hippocampus. A set of complex stimuli such as a visual background is often considered a contextual stimulus, and simple sensory stimuli such as pure tone and light are considered elemental stimuli. However, this elemental-contextual categorization scheme has only been tested in limited behavioral paradigms and it is largely unknown whether it can be generalized across different learning situations. By requiring rats to respond differently to a common object in association with various types of sensory cues including contextual and elemental stimuli, we tested whether different types of elemental and contextual sensory stimuli depended on the hippocampus to different degrees. In most rats, a surrounding visual background and a tactile stimulus served as contextual (hippocampal dependent) and elemental (non-hippocampal dependent) stimuli, respectively. However, simple tone and light stimuli frequently used as elemental cues in traditional experiments required the hippocampus to varying degrees among rats. Specifically, one group of rats showed a normal contextual bias when both contextual and elemental cues were present. These rats effectively switched to using elemental cues when the hippocampus was inactivated. The other group showed a strong contextual bias (and hippocampal dependence) because these rats were not able to use elemental cues when the hippocampus was unavailable. It is possible that the latter group of rats might have interpreted the elemental cues (light and tone) as background stimuli and depended more on the hippocampus in associating the cues with choice responses. Although the exact mechanisms underlying these individual differences are unclear, our findings recommend caution in adopting a simple sensory stimulus as a non-hippocampal sensory cue based solely on the literature.
Contextual Control by Function and Form of Transfer of Functions
Perkins, David R.; Dougher, Michael J.; Greenway, David E.
2007-01-01
This study investigated conditions leading to contextual control by stimulus topography over transfer of functions. Three 4-member stimulus equivalence classes, each consisting of four (A, B, C, D) topographically distinct visual stimuli, were established for 5 college students. Across classes, designated A stimuli were open-ended linear figures,…
Hierarchical acquisition of visual specificity in spatial contextual cueing.
Lie, Kin-Pou
2015-01-01
Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.
Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.
Flevaris, Anastasia V; Murray, Scott O
2015-09-02
Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses, in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement.
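As a rough illustration of how normalization plus a spatially global feature-based gain can flip suppression into enhancement, here is a toy sketch in Python. The equation's form and every parameter value are invented for illustration; this is not the authors' fitted model:

```python
E_CENTER = 1.0               # excitatory drive of the probed center unit
S_ISO, S_ORTH = 1.0, 0.4     # suppressive surround drive (orientation-tuned)
SIGMA, W, G = 0.5, 0.6, 2.0  # semisaturation, surround weight, attention gain

def response(surround_iso, attend_surround_orientation):
    s = S_ISO if surround_iso else S_ORTH
    # Feature-based attention is spatially global: attending the surround's
    # orientation boosts all units tuned to it, including the center unit
    # whenever center and surround share an orientation (iso condition).
    g_surround = G if attend_surround_orientation else 1.0
    g_center = G if (attend_surround_orientation and surround_iso) else 1.0
    return g_center * E_CENTER / (SIGMA + W * g_surround * s)

for attend in (False, True):
    iso, orth = response(True, attend), response(False, attend)
    tag = "surround attended" if attend else "surround unattended"
    kind = "enhancement" if iso > orth else "suppression"
    print(f"{tag}: iso={iso:.2f} orth={orth:.2f} -> orientation-tuned {kind}")
```

With these made-up parameters, the unattended iso-oriented surround suppresses the center response relative to an orthogonal surround, while attending the surround's orientation reverses the sign of the effect, mirroring the qualitative pattern the abstract describes.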
Contextual modulation and stimulus selectivity in extrastriate cortex.
Krause, Matthew R; Pack, Christopher C
2014-11-01
Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation.
Prefrontal Cortex Is Critical for Contextual Processing: Evidence from Brain Lesions
Fogelson, Noa; Shah, Mona; Scabini, Donatella; Knight, Robert T.
2009-01-01
We investigated the role of prefrontal cortex (PFC) in local contextual processing using a combined event-related potentials and lesion approach. Local context was defined as the occurrence of a short predictive series of visual stimuli occurring before delivery of a target event. Targets were preceded by either randomized sequences of standards…
Contextual consistency facilitates long-term memory of perceptual detail in barely seen images.
Gronau, Nurit; Shachar, Meytal
2015-08-01
It has long been known that contextual information affects memory for an object's identity (e.g., its basic-level category), yet it is unclear whether schematic knowledge additionally enhances memory for the precise visual appearance of an item. Here we investigated memory for visual detail of merely glimpsed objects. Participants viewed pairs of contextually related and unrelated stimuli, presented for an extremely brief duration (24 ms, masked). They then performed a forced-choice memory-recognition test for the precise perceptual appearance of 1 of 2 objects within each pair (i.e., the "memory-target" item). In 3 experiments, we show that memory-target stimuli originally appearing within contextually related pairs are remembered better than targets appearing within unrelated pairs. These effects are obtained whether the target is presented at test with its counterpart pair object (i.e., when reiterating the original context at encoding) or whether the target is presented alone, implying that the contextual consistency effects are mediated predominantly by processes occurring during stimulus encoding, rather than during stimulus retrieval. Furthermore, visual detail encoding is improved whether object relations involve implied action or not, suggesting that, contrary to some prior suggestions, action is not a necessary component for object-to-object associative "grouping" processes. Our findings suggest that during a brief glimpse, but not under long viewing conditions, contextual associations may play a critical role in reducing stimulus competition for attention selection and in facilitating rapid encoding of sensory details. Theoretical implications with respect to classic frame theories are discussed.
The Role of Search Speed in the Contextual Cueing of Children's Attention.
Darby, Kevin; Burling, Joseph; Yoshida, Hanako
2014-01-01
The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using an object search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicates that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document the effect in children have yielded mixed results. We examined the effect of search speed on contextual cueing with school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates this effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the role of attention toward contextually cued information.
Trial-by-trial adjustments in control triggered by incidentally encoded semantic cues.
Blais, Chris; Harris, Michael B; Sinanian, Michael H; Bunge, Silvia A
2015-01-01
Cognitive control mechanisms provide the flexibility to rapidly adapt to contextual demands. These contexts can be defined by top-down goals, but also by bottom-up perceptual factors, such as the location at which a visual stimulus appears. There are now several experiments reporting contextual control effects. Such experiments establish that contexts defined by low-level perceptual cues, such as the location of a visual stimulus, can lead to context-specific control, suggesting a relatively early focus for cognitive control. The current set of experiments involved a word-word interference task designed to assess whether a high-level cue, the semantic category to which a word belongs, can also facilitate contextual control. Indeed, participants exhibit a larger Flanker effect for items from a semantic category in which 75% of stimuli are incongruent than for items from a category in which only 25% are incongruent. Thus, both low-level and high-level stimulus features can affect the bottom-up engagement of cognitive control. The implications for current models of cognitive control are discussed.
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system.
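The trial-by-trial SF sampling can be approximated with a simple FFT-based filter. Below is a sketch under assumed conventions (band count, uniform random weights, annular bands); the paper's exact sampling procedure may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spatial_frequencies(image, n_bands=20):
    """Weight annular SF bands of a grayscale image by a random profile;
    per-trial profiles can later be regressed against accuracy or RT."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)          # SF of each FFT bin
    edges = np.linspace(0, radius.max() + 1, n_bands + 1)
    weights = rng.random(n_bands)                      # this trial's profile
    gain = np.zeros_like(radius)
    for i in range(n_bands):
        gain[(radius >= edges[i]) & (radius < edges[i + 1])] = weights[i]
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * gain)).real
    return filtered, weights

stimulus = rng.standard_normal((128, 128))  # stand-in for an object image
trial_image, sf_profile = sample_spatial_frequencies(stimulus)
```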
Qian, Ning; Dayan, Peter
2013-01-01
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli.
Murawski, Nathen J; Asok, Arun
2017-01-10
The precise contribution of visual information to contextual fear learning and discrimination has remained elusive. To better understand this contribution, we coupled the context pre-exposure facilitation effect (CPFE) fear conditioning paradigm with presentations of distinct visual scenes displayed on 4 LCD screens surrounding a conditioning chamber. Adult male Long-Evans rats received non-reinforced context pre-exposure on Day 1, an immediate 1.5 mA foot shock on Day 2, and a non-reinforced context test on Day 3. Rats were pre-exposed to either digital Context (dCtx) A, dCtx B, a distinct Ctx C, or no context on Day 1. Digital contexts A and B were identical except for the visual image displayed on the LCD screens. Immediate shock and retention testing occurred in dCtx A. Rats pre-exposed to dCtx A showed the CPFE, with significantly higher levels of freezing compared to controls. Rats pre-exposed to Context B failed to show the CPFE, with freezing levels that did not differ appreciably from those of controls. These results suggest that visual information contributes to contextual fear learning and that visual components of the context can be manipulated via LCD screens. Our approach offers a simple modification to contextual fear conditioning paradigms whereby the visual features of a context can be manipulated to better understand the factors that contribute to contextual fear discrimination and generalization.
Local contextual processing of abstract and meaningful real-life images in professional athletes.
Fogelson, Noa; Fernandez-Del-Olmo, Miguel; Acero, Rafael Martín
2012-05-01
We investigated the effect of abstract versus real-life meaningful images from sports on local contextual processing in two groups of professional athletes. Local context was defined as the occurrence of a short predictive series of stimuli occurring before delivery of a target event. EEG was recorded in 10 professional basketball players and 9 professional athletes of individual sports during three sessions. In each session, a different set of visual stimuli was presented: triangles facing left, up, right, or down; four images of a basketball player throwing a ball; or four images of a baseball player pitching a baseball. Stimuli consisted of 15% targets and 85% standards, with equal numbers of three types of standards. Recording blocks consisted of targets preceded by randomized sequences of standards and by sequences including a predictive sequence signaling the occurrence of a subsequent target event. Subjects pressed a button in response to targets. In all three sessions, reaction times and peak P3b latencies were shorter for predicted targets compared with random targets, the last most informative stimulus of the predictive sequence induced a robust P3b, and N2 amplitude was larger for random targets compared with predicted targets. P3b and N2 peak amplitudes were larger in the professional basketball group in comparison with the professional athletes of individual sports, across all three sessions. The findings of this study suggest that local contextual information is processed similarly for abstract and for meaningful images and that professional basketball players seem to allocate more attentional resources to the processing of these visual stimuli.
Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2012-10-25
In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.
Implicit contextual learning in prodromal and early stage Huntington's disease patients.
van Asselen, Marieke; Almeida, Inês; Júlio, Filipa; Januário, Cristina; Campos, Elzbieta Bobrowicz; Simões, Mário; Castelo-Branco, Miguel
2012-07-01
Huntington's disease (HD) is a genetic neurodegenerative disorder affecting the basal ganglia. These subcortical structures are particularly important for motor functions, response selection and implicit learning. In the current study, we have assessed prodromal and symptomatic HD participants with an implicit contextual learning task that is not based on motor learning, but on a purely visual implicit learning mechanism. We used an implicit contextual learning task in which subjects need to locate a target among several distractors. In half of the trials, the positions of the distractors and target stimuli were repeated. By memorizing this contextual information, attention can be guided faster to the target stimulus. Nine symptomatic HD participants, 16 prodromal HD participants and 22 control subjects were included. We found that the responses of the control subjects were faster for the repeated trials than for the new trials, indicating that their visual search was facilitated when repeated contextual information was present. In contrast, no difference in response times between the repeated and new trials was found for the symptomatic and prodromal HD participants. The results of the current study indicate that both prodromal and symptomatic HD participants are impaired on an implicit contextual learning task.
Pitting temporal against spatial integration in schizophrenic patients.
Herzog, Michael H; Brand, Andreas
2009-06-30
Schizophrenic patients show strong impairments in visual backward masking, possibly caused by deficits at the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we provide further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (a temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.
Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau
2009-01-01
Background: The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results: Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. Conclusions: The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond.
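For readers unfamiliar with the framing, the ideal-observer computation underlying this account is the standard posterior over likely stimulus sources given a retinal image; the paper's specific likelihood and prior, derived from natural-scene statistics and cone responses, are not reproduced here:

```latex
p(\mathrm{source} \mid \mathrm{image}) \;\propto\; p(\mathrm{image} \mid \mathrm{source})\, p(\mathrm{source})
```

On this view, perceived brightness tracks the most probable source of the retinal stimulus rather than its raw luminance.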
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After familiarisation with the software, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli were most beneficial for patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
Kelly, Debbie M; Cook, Robert G
2003-06-01
Three experiments examined the role of contextual information during line orientation and line position discriminations by pigeons (Columba livia) and humans (Homo sapiens). Experiment 1 tested pigeons' performance with these stimuli in a target localization task using texture displays. Experiments 2 and 3 tested pigeons and humans, respectively, with small and large variations of these stimuli in a same-different task. Humans showed a configural superiority effect when tested with displays constructed from large elements but not when tested with the smaller, more densely packed texture displays. The pigeons, in contrast, exhibited a configural inferiority effect when required to discriminate line orientation, regardless of stimulus size. These contrasting results suggest a species difference in the perception and use of features and contextual information in the discrimination of line information.
The transfer of Cfunc contextual control through equivalence relations.
Perez, William F; Fidalgo, Adriana P; Kovac, Roberta; Nico, Yara C
2015-05-01
Derived relational responding is affected by contextual stimuli (Cfunc) that select specific stimulus functions. The present study investigated the transfer of Cfunc contextual control through equivalence relations by evaluating both (a) the maintenance of Cfunc contextual control after the expansion of a relational network, and (b) the establishment of novel contextual stimuli by the transfer of Cfunc contextual control through equivalence relations. Initially, equivalence relations were established and contingencies were arranged so that colors functioned as Cfunc stimuli controlling participants' key-pressing responses in the presence of any stimulus from a three-member equivalence network. To investigate the first research question, the three-member equivalence relations were expanded to five members and the novel members were presented with the Cfunc stimuli in the key-pressing task. To address the second goal of this study, the colors (Cfunc) were established as equivalent to certain line patterns. The transfer of contextual cue function (Cfunc) was tested replacing the colored backgrounds with line patterns in the key-pressing task. Results suggest that the Cfunc contextual control was transferred to novel stimuli that were added to the relational network. In addition, the line patterns indirectly acquired the contextual cue function (Cfunc) initially established for the colored backgrounds. The conceptual and applied implications of Cfunc contextual control are discussed.
Effects of Perceptual and Contextual Enrichment on Visual Confrontation Naming in Adult Aging
Rogalski, Yvonne; Peelle, Jonathan E.; Reilly, Jamie
2011-01-01
Purpose: The purpose of this study was to determine the effects of enriching line drawings with color/texture and environmental context as a facilitator of naming speed and accuracy in older adults. Method: Twenty young and 23 older adults named high-frequency picture stimuli from the Boston Naming Test (Kaplan, Goodglass, & Weintraub, 2001) under…
Okamoto, Tsuyoshi; Ikezoe, Koji; Tamura, Hiroshi; Watanabe, Masataka; Aihara, Kazuyuki; Fujita, Ichiro
2011-01-01
In the primary visual cortex (V1) of some mammals, columns of neurons with the full range of orientation preferences converge at the center of a pinwheel-like arrangement, the 'pinwheel center' (PWC). Because a neuron receives abundant inputs from nearby neurons, the neuron's position on the cortical map likely has a significant impact on its responses to the layout of orientations inside and outside its classical receptive field (CRF). To understand the positional specificity of responses, we constructed a computational model based on orientation preference maps in monkey V1 and hypothetical neuronal connections. The model simulations showed that neurons near PWCs displayed weaker but detectable orientation selectivity within their CRFs, and strongly reduced contextual modulation from extra-CRF stimuli, compared with neurons distant from PWCs. We suggest that neurons near PWCs robustly extract local orientation within their CRF embedded in visual scenes, and that contextual information is processed in regions distant from PWCs.
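The model's starting point is an orientation preference map with pinwheels. For readers who want to experiment, a synthetic pinwheel-rich map can be generated from bandpass-filtered complex noise, a common construction in the map-modeling literature; this is a stand-in, not the monkey-derived maps the study actually used:

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_orientation_map(n=128, k0=8.0, bandwidth=2.0):
    """Annulus-filtered complex noise; half the phase angle yields an
    orientation preference map whose magnitude zeros are pinwheel
    centers (PWCs)."""
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = np.fft.fftshift(np.fft.fft2(noise))
    yy, xx = np.indices((n, n))
    k = np.hypot(yy - n / 2, xx - n / 2)
    annulus = np.exp(-((k - k0) ** 2) / (2 * bandwidth**2))
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * annulus))
    return np.angle(field) / 2.0  # preferred orientation in (-pi/2, pi/2]

orientation_map = synthetic_orientation_map()
```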
Perez, William F; Kovac, Roberta; Nico, Yara C; Caro, Daniel M; Fidalgo, Adriana P; Linares, Ila; de Almeida, João Henrique; de Rose, Júlio C
2017-11-01
According to Relational Frame Theory (RFT), Crel denotes a contextual stimulus that controls a particular type of relational response (sameness, opposition, comparative, temporal, hierarchical, etc.) in a given situation. Previous studies suggest that contextual functions may be indirectly acquired via transfer of function. The present study investigated the transfer of Crel contextual control through equivalence relations. Experiment 1 evaluated the transfer of Crel contextual functions for relational responses based on sameness and opposition. Experiment 2 extended these findings by evaluating transfer of function using comparative Crel stimuli. Both experiments followed a similar sequence of phases. First, abstract forms were established as Crel stimuli via multiple exemplar training (Phase 1). The contextual cues were then applied to establish arbitrary relations among nonsense words and to test derived relations (Phase 2). After that, equivalence relations involving the original Crel stimuli and other abstract forms were trained and tested (Phase 3). Transfer of function was evaluated by replacing the directly established Crel stimuli with their equivalent stimuli in the former experimental tasks (Phases 1 and 2). Results from both experiments suggest that Crel contextual control may be extended via equivalence relations, allowing other arbitrarily related stimuli to indirectly acquire Crel functions and regulate behavior by evoking appropriate relational responses in the presence of both previously known and novel stimuli.
Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A
2012-07-01
Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP), a deflection that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
Predispositions to approach and avoid are contextually sensitive and goal dependent.
Bamford, Susan; Ward, Robert
2008-04-01
The authors show that predispositions to approach and avoid do not consist simply of specific motor patterns but are more abstract functions that produce a desired environmental effect. It has been claimed that evaluating a visual stimulus as positive or negative evokes a specific motor response: extending the arm in response to negative stimuli and contracting it in response to positive stimuli. The authors showed that a large congruency effect (participants were faster to approach pleasant and avoid unpleasant stimuli than to approach unpleasant and avoid pleasant stimuli) could be produced on a novel touchscreen paradigm (Experiment 1), and that the congruency effect could be reversed by spatial (Experiment 2) and nonspatial (Experiment 3) response effects. Thus, involuntary approach and avoid response activations are not fixed but sensitive to context, and are specifically based on the desired goal.
A unified account of tilt illusions, association fields, and contour detection based on elastica.
Keemink, Sander W; van Rossum, Mark C W
2016-09-01
As expressed in the Gestalt law of good continuation, human perception tends to associate stimuli that form smooth continuations. Contextual modulation in primary visual cortex, in the form of association fields, is believed to play an important role in this process. Yet a unified and principled account of the good continuation law on the neural level is lacking. In this study we introduce a population model of primary visual cortex. Its contextual interactions depend on the elastica curvature energy of the smoothest contour connecting oriented bars. As expected, this model leads to association fields consistent with data. However, in addition the model displays tilt illusions for stimulus configurations with grating and single bars that closely match psychophysics. Furthermore, the model explains not only pop-out of contours amid a variety of backgrounds, but also pop-out of single targets amid a uniform background. We thus propose that elastica is a unifying principle of the visual cortical network.
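The elastica energy invoked here is, in its standard form (the constants are conventional weights, not values from the paper), an integral over the connecting contour gamma:

```latex
E[\gamma] \;=\; \int_{\gamma} \left( \alpha + \beta \, \kappa(s)^{2} \right) \mathrm{d}s
```

where kappa(s) is the curvature at arclength s; the model weights contextual interactions between oriented bars by the energy of the smoothest (minimum-energy) contour joining them.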
Contextual control using a go/no-go procedure with compound abstract stimuli.
Modenesi, Rafael Diego; Debert, Paula
2015-05-01
Contextual control has been described as (1) a five-term contingency, in which the contextual stimulus exerts conditional control over conditional discriminations, and (2) allowing one stimulus to be a member of different equivalence classes without merging them into one. Matching-to-sample is the most commonly employed procedure to produce and study contextual control. The present study evaluated whether the go/no-go procedure with compound stimuli produces equivalence classes that share stimuli. This procedure does not allow the identification of specific stimulus functions (e.g., contextual, conditional, or discriminative functions). If equivalence classes were established with this procedure, then only the latter part of the contextual control definition (2) would be met. Six undergraduate students participated in the present study. In the training phases, responses to AC, BD, and XY compounds with stimuli from the same classes were reinforced, and responses to AC, BD, and XY compounds with stimuli from different classes were not. In addition, responses to X1A1B1, X1A2B2, X2A1B2, and X2A2B1 compounds were reinforced and responses to the other combinations were not. During the tests, the participants had to respond to new combinations of stimuli compounds YCD to indicate the formation of four equivalence classes that share stimuli: X1A1B1Y1C1D1, X1A2B2Y1C2D2, X2A1B2Y2C1D2, and X2A2B1Y2C2D1. Four of the six participants showed the establishment of these classes. These results indicate that establishing contextual stimulus functions is unnecessary to produce equivalence classes that share stimuli. Therefore, these results are inconsistent with the first part of the definition of contextual control.
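The compound structure in this training is dense, so spelling out the combinations as plain data may help. A sketch reconstructed from the abstract (labels only; the actual stimuli were abstract forms):

```python
# Reinforced two-element training compounds (same-class pairings);
# cross-class AC, BD, and XY compounds went unreinforced.
reinforced_pairs = {("A1", "C1"), ("A2", "C2"),
                    ("B1", "D1"), ("B2", "D2"),
                    ("X1", "Y1"), ("X2", "Y2")}

# Reinforced three-element compounds; other XAB combinations unreinforced.
reinforced_triples = {("X1", "A1", "B1"), ("X1", "A2", "B2"),
                      ("X2", "A1", "B2"), ("X2", "A2", "B1")}

# Test: the YCD compounds consistent with the four predicted
# stimulus-sharing equivalence classes.
predicted_go = {("Y1", "C1", "D1"), ("Y1", "C2", "D2"),
                ("Y2", "C1", "D2"), ("Y2", "C2", "D1")}
```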
Contextual effects on motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2008-08-15
Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.
Circadian timed episodic-like memory - a bee knows what to do when, and also where.
Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu
2007-10-01
This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.
Cycowicz, Yael M; Friedman, David
2007-01-01
The orienting response, the brain's reaction to novel and/or out of context familiar events, is reflected by the novelty P3 of the ERP. Contextually novel events also engender high rates of recognition memory. We examined, under incidental and intentional conditions, the effects of visual symbol familiarity on the novelty P3 recorded during an oddball task and on the parietal episodic memory (EM) effect, an index of recollection. Repetition of familiar, but not unfamiliar, symbols elicited a reduction in the novelty P3. Better recognition performance for the familiar symbols was associated with a robust parietal EM effect, which was absent for the unfamiliar symbols in the incidental task. These data demonstrate that processing of novel events depends on expectation and whether stimuli have preexisting representations in long-term semantic memory.
What is the context of contextual cueing?
Makovski, Tal
2016-12-01
People have a powerful ability to extract regularities from noisy environments and to utilize this knowledge to assist in visual search. Extensive research has shown that this ability, termed contextual cueing (CC), is robust and ubiquitous, but it is still unclear what exactly is the context that is being learned. Researchers have typically focused on how people learn spatial configuration regularities and have hence used simplified, meaningless search stimuli. Here, observers performed visual search tasks using images of real-world objects. The results revealed that, contrary to past findings, the repetition of either arbitrary spatial information or identity information was not sufficient to produce context learning. Instead, learning was found only when both types of information were repeated together. These results were further replicated in hybrid search tasks, in which subjects looked for multiple target templates. Together, these data suggest that CC is more limited than typically assumed, yet this learning is highly robust.
Zhang, Li; Fang, Qiaochu; Gabriel, Florence C; Szücs, Dénes
2014-01-01
Recent studies have indicated that people have a strong tendency to compare fractions based on constituent numerators or denominators. This is called componential processing. This study explored whether componential processing was preferred in tasks involving high stimulus variability and high contextual interference, when fractions could be compared based either on the holistic values of fractions or on their denominators. Here, stimulus variability referred to the fact that the fractions were varied rather than uniform in format. Contextual interference referred to the fact that the processing of fractions was interfered with by other stimuli. To this end, three tasks were used. In Task 1, participants compared a standard fraction 1/5 to unit fractions. This task was used as a low stimulus variability and low contextual interference task. In Task 2, stimulus variability was increased by mixing unit and non-unit fractions. In Task 3, high contextual interference was created by incorporating decimals into fractions. The RT results showed that the processing patterns of fractions were very similar for adults and children. In Tasks 1 and 3, only componential processing was utilized. In contrast, both holistic processing and componential processing were utilized in Task 2. These results suggest that, if individuals are presented with the opportunity to perform componential processing, both adults and children will tend to do so, even if they are faced with high variability of fractions or high contextual interference.
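The two strategies contrasted in these tasks are easy to state as code. A sketch for the unit-fraction case of Task 1 (illustrative only; function names are invented):

```python
from fractions import Fraction

def holistic_compare(a, b):
    """Compare by real-number magnitude (returns -1, 0, or 1)."""
    return (a > b) - (a < b)

def componential_compare(a, b):
    """Unit-fraction shortcut: with numerators fixed at 1, the larger
    denominator means the smaller fraction; no magnitudes computed."""
    assert a.numerator == b.numerator == 1
    return (b.denominator > a.denominator) - (b.denominator < a.denominator)

x, y = Fraction(1, 5), Fraction(1, 7)
assert holistic_compare(x, y) == componential_compare(x, y) == 1  # 1/5 > 1/7
```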
Exploring the parahippocampal cortex response to high and low spatial frequency spaces.
Zeidman, Peter; Mullally, Sinéad L; Schwarzkopf, Dietrich Samuel; Maguire, Eleanor A
2012-05-30
The posterior parahippocampal cortex (PHC) supports a range of cognitive functions, in particular scene processing. However, it has recently been suggested that PHC engagement during functional MRI simply reflects the representation of three-dimensional local space. If so, PHC should respond to space in the absence of scenes, geometric layout, objects or contextual associations. It has also been reported that PHC activation may be influenced by low-level visual properties of stimuli such as spatial frequency. Here, we tested whether PHC was responsive to the mere sense of space in highly simplified stimuli, and whether this was affected by their spatial frequency distribution. Participants were scanned using functional MRI while viewing depictions of simple three-dimensional space, and matched control stimuli that did not depict a space. Half the stimuli were low-pass filtered to ascertain the impact of spatial frequency. We observed a significant interaction between space and spatial frequency in bilateral PHC. Specifically, stimuli depicting space (more than nonspatial stimuli) engaged the right PHC when they featured high spatial frequencies. In contrast, the interaction in the left PHC did not show a preferential response to space. We conclude that a simple depiction of three-dimensional space that is devoid of objects, scene layouts or contextual associations is sufficient to robustly engage the right PHC, at least when high spatial frequencies are present. We suggest that coding for the presence of space may be a core function of PHC, and could explain its engagement in a range of tasks, including scene processing, where space is always present.
If it bleeds, it leads: separating threat from mere negativity.
Kveraga, Kestutis; Boshyan, Jasmine; Adams, Reginald B; Mote, Jasmine; Betz, Nicole; Ward, Noreen; Hadjikhani, Nouchine; Bar, Moshe; Barrett, Lisa F
2015-01-01
Most theories of emotion hold that negative stimuli are threatening and aversive. Yet in everyday experiences some negative sights (e.g. car wrecks) attract curiosity, whereas others repel (e.g. a weapon pointed in our face). To examine the diversity in negative stimuli, we employed four classes of visual images (Direct Threat, Indirect Threat, Merely Negative and Neutral) in a set of behavioral and functional magnetic resonance imaging studies. Participants reliably discriminated between the images, evaluating Direct Threat stimuli most quickly, and Merely Negative images most slowly. Threat images evoked greater and earlier blood oxygen level-dependent (BOLD) activations in the amygdala and periaqueductal gray, structures implicated in representing and responding to the motivational salience of stimuli. Conversely, the Merely Negative images evoked larger BOLD signal in the parahippocampal, retrosplenial, and medial prefrontal cortices, regions which have been implicated in contextual association processing. Ventrolateral as well as medial and lateral orbitofrontal cortices were activated by both threatening and Merely Negative images. In conclusion, negative visual stimuli can repel or attract scrutiny depending on their current threat potential, which is assessed by dynamic shifts in large-scale brain network activity.
Mirror me: Imitative responses in adults with autism.
Schunke, Odette; Schöttle, Daniel; Vettorazzi, Eik; Brandt, Valerie; Kahl, Ursula; Bäumer, Tobias; Ganos, Christos; David, Nicole; Peiker, Ina; Engel, Andreas K; Brass, Marcel; Münchau, Alexander
2016-02-01
Dysfunctions of the human mirror neuron system have been postulated to underlie some deficits in autism spectrum disorders including poor imitative performance and impaired social skills. Using three reaction time experiments addressing mirror neuron system functions under simple and complex conditions, we examined 20 adult autism spectrum disorder participants and 20 healthy controls matched for age, gender and education. Participants performed simple finger-lifting movements in response to (1) biological finger and non-biological dot movement stimuli, (2) acoustic stimuli and (3) combined visual-acoustic stimuli with different contextual (compatible/incompatible) and temporal (simultaneous/asynchronous) relation. Mixed model analyses revealed slower reaction times in autism spectrum disorder. Both groups responded faster to biological compared to non-biological stimuli (Experiment 1) implying intact processing advantage for biological stimuli in autism spectrum disorder. In Experiment 3, both groups had similar 'interference effects' when stimuli were presented simultaneously. However, autism spectrum disorder participants had abnormally slow responses particularly when incompatible stimuli were presented consecutively. Our results suggest imitative control deficits rather than global imitative system impairments.
Culture-related differences in default network activity during visuo-spatial judgments.
Goh, Joshua O S; Hebrank, Andrew C; Sutton, Bradley P; Chee, Michael W L; Sim, Sam K Y; Park, Denise C
2013-02-01
Studies on culture-related differences in cognition have shown that Westerners attend more to object-related information, whereas East Asians attend more to contextual information. Neural correlates of these different culture-related visual processing styles have been reported in the ventral-visual and fronto-parietal regions. We conducted an fMRI study of East Asians and Westerners on a visuospatial judgment task that involved relative, contextual judgments, which are typically more challenging for Westerners. Participants judged the relative distances between a dot and a line in visual stimuli during task blocks and alternated finger presses during control blocks. Behaviorally, East Asians responded faster than Westerners, reflecting greater ease of the task for East Asians. In response to the greater task difficulty, Westerners showed greater neural engagement compared to East Asians in frontal, parietal, and occipital areas. Moreover, Westerners also showed greater suppression of the default network, a brain network that is suppressed under conditions of high cognitive challenge. This study demonstrates for the first time that cultural differences in visual attention during a cognitive task are manifested both by differences in activation in fronto-parietal regions and by suppression in default regions.
McDonald, J Scott; Seymour, Kiley J; Schira, Mark M; Spehar, Branka; Clifford, Colin W G
2009-05-01
The responses of orientation-selective neurons in primate visual cortex can be profoundly affected by the presence and orientation of stimuli falling outside the classical receptive field. Our perception of the orientation of a line or grating also depends upon the context in which it is presented. For example, the perceived orientation of a grating embedded in a surround tends to be repelled from the predominant orientation of the surround. Here, we used fMRI to investigate the basis of orientation-specific surround effects in five functionally-defined regions of visual cortex: V1, V2, V3, V3A/LO1 and hV4. Test stimuli were luminance-modulated and isoluminant gratings that produced responses similar in magnitude. Less BOLD activation was evident in response to gratings with parallel versus orthogonal surrounds across all the regions of visual cortex investigated. When an isoluminant test grating was surrounded by a luminance-modulated inducer, the degree of orientation-specific contextual modulation was no larger for extrastriate areas than for V1, suggesting that the observed effects might originate entirely in V1. However, more orientation-specific modulation was evident in extrastriate cortex when both test and inducer were luminance-modulated gratings than when the test was isoluminant; this difference was significant in area V3. We suggest that the pattern of results in extrastriate cortex may reflect a refinement of the orientation-selectivity of surround suppression specific to the colour of the surround or, alternatively, processes underlying the segmentation of test and inducer by spatial phase or orientation when no colour cue is available.
Electrophysiological indices of surround suppression in humans
Vanegas, M. Isabel; Blangero, Annabelle
2014-01-01
Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464
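The gain-control account of surround suppression described above admits a compact quantitative illustration. The sketch below fits a generic divisive-suppression curve to hypothetical SSVEP foreground amplitudes measured at several surround contrasts; the data values, the parameter names, and the specific functional form are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SSVEP foreground amplitudes (arbitrary units) at several
# surround contrasts; the numbers are invented for illustration.
surround_contrast = np.array([0.0, 0.06, 0.12, 0.25, 0.5, 1.0])
ssvep_amplitude = np.array([2.0, 1.8, 1.5, 1.2, 0.9, 0.7])

def divisive_suppression(c_surr, r0, w, c50):
    """Generic divisive gain control: the foreground response is divided
    down as surround contrast enters the normalization pool."""
    return r0 / (1.0 + w * c_surr / (c50 + c_surr))

params, _ = curve_fit(divisive_suppression, surround_contrast,
                      ssvep_amplitude, p0=[2.0, 2.0, 0.2], maxfev=10000)

# Suppression index: fractional amplitude loss at the highest surround contrast.
si = 1.0 - ssvep_amplitude[-1] / ssvep_amplitude[0]
print(f"fitted [r0, w, c50]: {np.round(params, 2)}, suppression index: {si:.2f}")
```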
Specificity and timescales of cortical adaptation as inferences about natural movie statistics.
Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia
2016-10-01
Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
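Stripped to its core, the computation proposed here is divisive normalization of the present input by a weighted history of past inputs. A minimal sketch follows; the exponential history weighting is a stand-in, and the paper's key ingredient, gating of normalization by the inferred statistical dependence between past and present, is deliberately omitted. All parameter values are arbitrary.

```python
import numpy as np

def temporal_divisive_normalization(drive, w_past=0.8, sigma=1.0):
    """Divide the present drive by a running, exponentially weighted
    history of past drive (a minimal stand-in for inference-gated
    normalization; the dependence-gating itself is omitted)."""
    out = np.zeros(len(drive))
    history = 0.0
    for t, d in enumerate(drive):
        out[t] = d / (sigma + history)                 # normalize by the past
        history = w_past * history + (1 - w_past) * d  # update the history
    return out

# Sustained input: the normalized response declines over time even though
# the raw drive is constant, i.e. the model adapts.
drive = np.concatenate([np.zeros(5), np.ones(20)])
print(np.round(temporal_divisive_normalization(drive), 3))
```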
The influence of surround suppression on adaptation effects in primary visual cortex
Wissig, Stephanie C.
2012-01-01
Adaptation, the prolonged presentation of stimuli, has been used to probe mechanisms of visual processing in physiological, imaging, and perceptual studies. Previous neurophysiological studies have measured adaptation effects by using stimuli tailored to evoke robust responses in individual neurons. This approach provides an incomplete view of how an adapter alters the representation of sensory stimuli by a population of neurons with diverse functional properties. We implanted microelectrode arrays in primary visual cortex (V1) of macaque monkeys and measured orientation tuning and contrast sensitivity in populations of neurons before and after prolonged adaptation. Whereas previous studies in V1 have reported that adaptation causes stimulus-specific suppression of responsivity and repulsive shifts in tuning preference, we have found that adaptation can also lead to response facilitation and shifts in tuning toward the adapter. To explain this range of effects, we have proposed and tested a simple model that employs stimulus-specific suppression in both the receptive field and the spatial surround. The predicted effects on tuning depend on the relative drive provided by the adapter to these two receptive field components. Our data reveal that adaptation can have a much richer repertoire of effects on neuronal responsivity and tuning than previously considered and suggest an intimate mechanistic relationship between spatial and temporal contextual effects. PMID:22423001
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S) simulating a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke, and whether this performance correlates with dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who had had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. For the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly disadvantaged by the non-contextual sounds (sounds from living beings, sounds from supermarket objects and names of other products) as compared with beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions. These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli.
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016.
The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences
Noh, Soo Rim; Isaacowitz, Derek M.
2015-01-01
Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774
Lee, Kyung Hwa; Siegle, Greg J
2014-11-01
This study examined the extent to which neural reactivity to emotional face stimuli differs from that associated with more ecological, contextually augmented stimuli. Participants were scanned while they viewed contextually rich pictures depicting both emotional faces and context, and pictures of emotional faces presented alone. Emotional faces alone were more strongly associated with brain activity in paralimbic and social information processing regions, whereas emotional faces augmented by context were associated with increased and sustained activity in regions potentially representing increased complexity and subjective emotional experience. Furthermore, context effects were modulated by emotional intensity and valence. These findings suggest that cortical elaboration that is apparent in contextually augmented stimuli may be missed in studies of emotional faces alone, whereas emotional faces may more selectively recruit limbic reactivity.
Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina
2013-01-01
A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.
Contextual modulation revealed by optical imaging exhibits figural asymmetry in macaque V1 and V2.
Zarella, Mark D; Ts'o, Daniel Y
2017-01-01
Neurons in early visual cortical areas are influenced by stimuli presented well beyond the confines of their classical receptive fields, endowing them with the ability to encode fine-scale features while also having access to the global context of the visual scene. This property can potentially define a role for the early visual cortex to contribute to a number of important visual functions, such as surface segmentation and figure-ground segregation. It is unknown how extraclassical response properties conform to the functional architecture of the visual cortex, given the high degree of functional specialization in areas V1 and V2. We examined the spatial relationships of contextual activations in macaque V1 and V2 with intrinsic signal optical imaging. Using figure-ground stimulus configurations defined by orientation or motion, we found that extraclassical modulation is restricted to the cortical representations of the figural component of the stimulus. These modulations were positive in sign, suggesting a relative enhancement in neuronal activity that may reflect an excitatory influence. Orientation and motion cues produced similar patterns of activation that traversed the functional subdivisions of V2. The asymmetrical nature of the enhancement demonstrated the capacity for visual cortical areas as early as V1 to contribute to figure-ground segregation, and the results suggest that this information can be extracted from the population activity constrained only by retinotopy, and not the underlying functional organization.
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception; more specifically, perceived visual fidelity increases in their presence. These results have implications for designers of multimodal virtual worlds and serious games who, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
Kafkas, Alexandros; Montaldi, Daniela
2015-01-01
The role of contextual expectation in processing familiar and novel stimuli was investigated in a series of experiments combining eye tracking, functional magnetic resonance imaging, and behavioral methods. An experimental paradigm emphasizing either familiarity or novelty detection at retrieval was used. The detection of unexpected familiar and novel stimuli, which were characterized by lower probability, engaged activity in midbrain and striatal structures. Specifically, detecting unexpected novel stimuli, relative to expected novel stimuli, produced greater activity in the substantia nigra/ventral tegmental area (SN/VTA), whereas the detection of unexpected familiar, relative to expected familiar, stimuli elicited activity in the striatum/globus pallidus (GP). An effective connectivity analysis showed greater functional coupling between these two seed areas (GP and SN/VTA) and the hippocampus, for unexpected than for expected stimuli. Within this network of midbrain/striatal–hippocampal interactions two pathways are apparent; the direct SN–hippocampal pathway sensitive to unexpected novelty and the perirhinal–GP–hippocampal pathway sensitive to unexpected familiarity. In addition, increased eye fixations and pupil dilations also accompanied the detection of unexpected relative to expected familiar and novel stimuli, reflecting autonomic activity triggered by the functioning of these two pathways. Finally, subsequent memory for unexpected, relative to expected, familiar, and novel stimuli was characterized by enhanced recollection, but not familiarity, accuracy. Taken together, these findings suggest that a hippocampal–midbrain network, characterized by two distinct pathways, mediates encoding facilitation and most critically, that this facilitation is driven by contextual novelty, rather than by the absolute novelty of a stimulus. This contextually sensitive neural mechanism appears to elicit increased exploratory behavior, leading subsequently to greater recollection of the unexpected stimulus. PMID:25708843
Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.
2016-01-01
Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporally precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the latter acting for natural scenes as a form of “effective” feed-forward inhibition, as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus composes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445
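The fast synaptic depression invoked here is commonly formalized with Tsodyks-Markram resource dynamics. The sketch below implements that textbook scheme in isolation, with arbitrary parameters; it is not the conductance-based network model used in the study.

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=0.5, use=0.5):
    """Tsodyks-Markram style short-term depression: each spike consumes a
    fraction `use` of the available resource x, which then recovers toward
    1 with time constant tau_rec. Returns the efficacy of each spike."""
    x, last_t, efficacies = 1.0, None, []
    for t in spike_times:
        if last_t is not None:  # resource recovery since the previous spike
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        efficacies.append(use * x)  # efficacy transmitted by this spike
        x -= use * x                # resource consumed by this spike
        last_t = t
    return np.array(efficacies)

# A dense 20 Hz train depresses strongly; a sparse 1 Hz train barely does,
# so temporally sparse natural-scene-like drive keeps synapses effective.
print(np.round(depressing_synapse(np.arange(0, 0.5, 0.05)), 3))  # 20 Hz
print(np.round(depressing_synapse(np.arange(0, 5.0, 1.0)), 3))   # 1 Hz
```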
Outside influence: The sense of agency takes into account what is in our surroundings.
Hon, Nicholas; Seow, Yin-Yi; Pereira, Don
2018-05-01
We are quite capable of distinguishing those outcomes we cause from those we do not. This ability to sense self-agency is thought to be produced by a comparison between a predictive representation of an outcome and the actual outcome that occurs. It is unclear, though, specifically what types of information can be entered into agency computations. Here, we demonstrate that information from non-target stimuli (stimuli that are not directly acted upon) incidentally present in our surroundings can influence predictions of outcomes, consequently modulating the sense of agency over clearly-defined target outcomes (those that occur to acted-upon stimuli). This provides the first evidence that our sense of agency is contextualized with respect to what is in our immediate visual environment. Furthermore, our data suggest that agency computations, instead of just a single comparison, may involve comparisons performed in stages, with different stages involving different types/classes of information. A model of such multi-stage comparisons is described.
Contextual Modulation is Related to Efficiency in a Spiking Network Model of Visual Cortex.
Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo; Vanni, Simo
2015-01-01
In the visual cortex, stimuli outside the classical receptive field (CRF) modulate the neural firing rate, without driving the neuron by themselves. In the primary visual cortex (V1), such contextual modulation can be parametrized with an area summation function (ASF): increasing stimulus size causes first an increase and then a decrease of firing rate before reaching an asymptote. Earlier work has reported an increase in sparseness when CRF stimulation is extended to its surroundings. However, no clear connection between the ASF and network efficiency has been established. To investigate this possible link, we simulated the responses of a biomimetic spiking neural network model of the visual cortex to a set of natural images. We varied the network parameters, and compared the V1 excitatory neuron spike responses to the corresponding responses predicted from earlier single neuron data from primate visual cortex. The network efficiency was quantified with firing rate (which has a direct association with neural energy consumption), entropy per spike and population sparseness. All three measures together provided a clear association between the network efficiency and the ASF. The association was clear when varying the horizontal connectivity within V1, which influenced both the efficiency and the distance to the ASF (DAS). Given the limitations of our biophysical model, this association is qualitative, but nevertheless suggests that an ASF-like receptive field structure can cause an efficient population response.
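The ASF itself is often written as a ratio of integrated Gaussians, the form popularized by Cavanaugh and colleagues for V1 area summation. The sketch below evaluates that generic form with arbitrary parameters to reproduce the rise, suppression, and asymptote described above; it is an illustration, not the study's network model.

```python
import numpy as np
from scipy.special import erf

def area_summation(size, k_c=60.0, w_c=1.0, k_s=1.5, w_s=3.0):
    """Ratio-of-Gaussians ASF: drive integrated over a small center is
    divided by drive integrated over a larger suppressive surround."""
    center = erf(size / w_c) ** 2
    surround = erf(size / w_s) ** 2
    return k_c * center / (1.0 + k_s * surround)

sizes = np.linspace(0.1, 10.0, 100)   # stimulus diameter (deg)
resp = area_summation(sizes)
peak = sizes[np.argmax(resp)]
print(f"response peaks at ~{peak:.1f} deg, then settles near {resp[-1]:.1f}")
```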
Differential contribution of early visual areas to the perceptual process of contour processing.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
2004-04-01
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
The time course of attentional deployment in contextual cueing.
Jiang, Yuhong V; Sigstad, Heather M; Swallow, Khena M
2013-04-01
The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.
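The quantity manipulated in the testing phase reduces to a simple difference score per SOA. A toy sketch of the analysis (trial counts, column names, and RT values are all invented):

```python
import pandas as pd

# Hypothetical trial-level data: layout-cue SOA (ms), display condition,
# and search RT (ms); the numbers are invented for illustration.
trials = pd.DataFrame({
    "soa":       [0, 0, 500, 500, 1000, 1000] * 2,
    "condition": ["repeated", "novel"] * 6,
    "rt":        [820, 900, 815, 905, 810, 895, 830, 910, 800, 890, 825, 900],
})

# Contextual cueing = mean RT(novel) - mean RT(repeated), computed per SOA.
means = trials.groupby(["soa", "condition"])["rt"].mean().unstack()
cueing = means["novel"] - means["repeated"]
print(cueing)  # roughly flat across SOAs, mirroring the equivalence reported
```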
Glenn, Daniel E; Risbrough, Victoria B; Simmons, Alan N; Acheson, Dean T; Stout, Daniel M
2017-10-21
There has been a great deal of recent interest in human models of contextual fear learning, particularly due to the use of such paradigms for investigating neural mechanisms related to the etiology of posttraumatic stress disorder. However, the construct of "context" in fear conditioning research is broad, and the operational definitions and methods used to investigate contextual fear learning in humans are wide ranging and lack specificity, making it difficult to interpret findings about neural activity. Here we will review neuroimaging studies of contextual fear acquisition in humans. We will discuss the methodology associated with four broad categories of how contextual fear learning is manipulated in imaging studies (colored backgrounds, static picture backgrounds, virtual reality, and configural stimuli) and highlight findings for the primary neural circuitry involved in each paradigm. Additionally, we will offer methodological recommendations for human studies of contextual fear acquisition, including using stimuli that distinguish configural learning from discrete cue associations and clarifying how context is experimentally operationalized.
Contextual cueing by global features
Kunar, Melina A.; Flusberg, Stephen J.; Wolfe, Jeremy M.
2008-01-01
In visual search tasks, attention can be guided to a target item, appearing amidst distractors, on the basis of simple features (e.g. find the red letter among green). Chun and Jiang’s (1998) “contextual cueing” effect shows that RTs are also speeded if the spatial configuration of items in a scene is repeated over time. In these studies we ask if global properties of the scene can speed search (e.g. if the display is mostly red, then the target is at location X). In Experiment 1a, the overall background color of the display predicted the target location. Here the predictive color could appear 0, 400 or 800 msec in advance of the search array. Mean RTs are faster in predictive than in non-predictive conditions. However, there is little improvement in search slopes. The global color cue did not improve search efficiency. Experiments 1b-1f replicate this effect using different predictive properties (e.g. background orientation/texture, stimuli color etc.). The results show a strong RT effect of predictive background but (at best) only a weak improvement in search efficiency. A strong improvement in efficiency was found, however, when the informative background was presented 1500 msec prior to the onset of the search stimuli and when observers were given explicit instructions to use the cue (Experiment 2). PMID:17355043
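The distinction drawn here between an overall RT benefit and improved search efficiency is the difference between the intercept and the slope of the RT × set size function. A sketch with invented numbers:

```python
import numpy as np

# Hypothetical mean RTs (ms) at three set sizes, with predictive vs.
# non-predictive backgrounds; the values are invented for illustration.
set_sizes = np.array([4, 8, 12])
rt_predictive = np.array([650, 810, 970])
rt_nonpredictive = np.array([720, 880, 1040])

# Slope (ms/item) indexes search efficiency; intercept indexes everything
# else. A predictive background that shifts only the intercept speeds RTs
# without making search itself more efficient, the paper's key pattern.
slope_p, intercept_p = np.polyfit(set_sizes, rt_predictive, 1)
slope_n, intercept_n = np.polyfit(set_sizes, rt_nonpredictive, 1)
print(f"slopes: {slope_p:.0f} vs {slope_n:.0f} ms/item; "
      f"intercepts: {intercept_p:.0f} vs {intercept_n:.0f} ms")
```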
Effects of Perceptual and Contextual Enrichment on Visual Confrontation Naming in Adult Aging
Rogalski, Yvonne; Peelle, Jonathan E.; Reilly, Jamie
2013-01-01
Purpose: The purpose of this study was to determine the effects of enriching line drawings with color/texture and environmental context as a facilitator of naming speed and accuracy in older adults. Method: Twenty young and 23 older adults named high-frequency picture stimuli from the Boston Naming Test (Kaplan, Goodglass, & Weintraub, 2001) under three conditions: (a) black-and-white items, (b) colorized-texturized items, and (c) scene-primed colored items (e.g., “hammock” preceded 1,000 ms by a backyard scene). Results: With respect to speeded naming latencies, mixed-model analyses of variance revealed that young adults did not benefit from colorization-texturization but did show scene-priming effects. In contrast, older adults failed to show facilitation effects from either colorized-texturized or scene-primed items. Moreover, older adults were consistently slower to initiate naming than were their younger counterparts across all conditions. Conclusions: Perceptual and contextual enrichment of sparse line drawings does not appear to facilitate visual confrontation naming in older adults, whereas younger adults do tend to show benefits of scene priming. We interpret these findings as generally supportive of a processing speed account of age-related object picture-naming difficulty. PMID:21498581
Rimmele, Ulrike; Davachi, Lila; Petrov, Radoslav; Dougal, Sonya; Phelps, Elizabeth A.
2013-01-01
Emotion strengthens the subjective experience of recollection. However, these vivid and confidently remembered emotional memories may not necessarily be more accurate. We investigated whether the subjective sense of recollection for negative stimuli is coupled with enhanced memory accuracy for contextual details using the remember/know paradigm. Our results indicate a double-dissociation between the subjective feeling of remembering, and the objective memory accuracy for details of negative and neutral scenes. “Remember” judgments were boosted for negative relative to neutral scenes. In contrast, memory for contextual details and associative binding was worse for negative compared to neutral scenes given a “remember” response. These findings show that the enhanced subjective recollective experience for negative stimuli does not reliably indicate greater objective recollection, at least of the details tested, and thus may be driven by a different mechanism than the subjective recollective experience for neutral stimuli. PMID:21668106
Santos, Thays Brenner; Kramer-Soares, Juliana Carlota; Favaro, Vanessa Manchim; Oliveira, Maria Gabriela Menezes
2017-10-01
Time plays an important role in conditioning: it is not only possible to associate stimuli with events that overlap, as in delay fear conditioning, but also to associate stimuli that are discontinuous in time, as shown in trace conditioning with discrete stimuli. The environment itself can be a powerful conditioned stimulus (CS) and be associated with an unconditioned stimulus (US). Thus, the aim of the present study was to determine the parameters under which contextual fear conditioning occurs through the maintenance of a contextual representation over short and long time intervals. The results showed that a contextual representation can be maintained and associated after 5s, even in the absence of a 15s re-exposure to the training context before US delivery. The same effect was not observed with a 24h interval of discontinuity. Furthermore, an optimal conditioned response with a 5s interval is produced only when the contexts (of pre-exposure and shock) match. As the pre-limbic cortex (PL) is necessary for the maintenance of a continuous representation of a stimulus, the involvement of the PL in this temporal and contextual processing was investigated. The reversible inactivation of the PL by muscimol infusion impaired the acquisition of contextual fear conditioning with a 5s interval, but not with a 24h interval, and did not impair delay fear conditioning. The data provide evidence that short and long intervals of discontinuity involve different mechanisms, thus contributing to a better understanding of PL involvement in contextual fear conditioning and providing a model that considers both temporal and contextual factors in fear conditioning.
Value-Driven Attentional Capture is Modulated by Spatial Context
Anderson, Brian A.
2014-01-01
When stimuli are associated with reward outcome, their visual features acquire high attentional priority such that stimuli possessing those features involuntarily capture attention. Whether a particular feature is predictive of reward, however, will vary with a number of contextual factors. One such factor is spatial location: for example, red berries are likely to be found in low-lying bushes, whereas yellow bananas are likely to be found on treetops. In the present study, I explore whether the attentional priority afforded to reward-associated features is modulated by such location-based contingencies. The results demonstrate that when a stimulus feature is associated with a reward outcome in one spatial location but not another, attentional capture by that feature is selective to when it appears in the rewarded location. This finding provides insight into how reward learning effectively modulates attention in an environment with complex stimulus–reward contingencies, thereby supporting efficient foraging. PMID:26069450
Parsons, Thomas D.
2015-01-01
An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG, were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, which support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively.
Context controls access to working and reference memory in the pigeon (Columba livia).
Roberts, William A; Macpherson, Krista; Strang, Caroline
2016-01-01
The interaction between working and reference memory systems was examined under conditions in which salient contextual cues were presented during memory retrieval. Ambient colored lights (red or green) bathed the operant chamber during the presentation of comparison stimuli in delayed matching-to-sample training (working memory) and during the presentation of the comparison stimuli as S+ and S- cues in discrimination training (reference memory). Strong competition between memory systems appeared when the same contextual cue appeared during working and reference memory training. When different contextual cues were used, however, working memory was completely protected from reference memory interference.
Object based implicit contextual learning: a study of eye movements.
van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel
2011-02-01
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.
Visual Displays and Contextual Presentations in Computer-Based Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon
1998-01-01
Investigates the effects of two instructional strategies, visual display (animation and static graphics, with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…
Emotional valence and contextual affordances flexibly shape approach-avoidance movements
Saraiva, Ana Carolina; Schüür, Friederike; Bestmann, Sven
2013-01-01
Behavior is influenced by the emotional content—or valence—of stimuli in our environment. Positive stimuli facilitate approach, whereas negative stimuli facilitate defensive actions such as avoidance (flight) and attack (fight). Facilitation of approach or avoidance movements may also be influenced by whether it is the self that moves relative to a stimulus (self-reference) or the stimulus that moves relative to the self (object-reference), adding flexibility and context-dependence to behavior. Alternatively, facilitation of approach-avoidance movements may happen in a pre-defined and muscle-specific way, whereby arm flexion is faster to approach positive stimuli (e.g., flexing the arm brings a stimulus closer) and arm extension faster to avoid negative stimuli (e.g., extending the arm moves the stimulus away). While this allows for relatively fast responses, it may compromise the flexibility offered by contextual influences. Here we asked under which conditions approach-avoidance actions are influenced by contextual factors (i.e., reference-frame). We manipulated the reference-frame in which actions occurred by asking participants to move a symbolic manikin (representing the self) toward or away from a positive or negative stimulus, and to move a stimulus toward or away from the manikin. We also controlled for the type of movements used to approach or avoid in each reference-frame. We show that the reference-frame influences approach-avoidance actions to emotional stimuli, but additionally we find muscle-specificity for negative stimuli in self-reference contexts. We speculate this muscle-specificity may be a fast and adaptive response to threatening stimuli. Our results confirm that approach-avoidance behavior is flexible and reference-frame dependent, but can be muscle-specific depending on the context and valence of the stimulus. Reference-frame and stimulus-evaluation are key factors in guiding approach-avoidance behavior toward emotional stimuli in our environment. PMID:24379794
Evidence for Hippocampus-Dependent Contextual Learning at Postnatal Day 17 in the Rat
ERIC Educational Resources Information Center
Foster, Jennifer A.; Burman, Michael A.
2010-01-01
Long-term memory for fear of an environment (contextual fear conditioning) emerges later in development (postnatal day; PD 23) than long-term memory for fear of discrete stimuli (PD 17). As contextual, but not explicit cue, fear conditioning relies on the hippocampus; this has been interpreted as evidence that the hippocampus is not fully…
Cruse, Damian; Wilding, Edward L
2011-06-01
In a pair of recent studies, frontally distributed event-related potential (ERP) indices of two distinct post-retrieval processes were identified. It has been proposed that one of these processes operates over any kind of task-relevant information in service of task demands, while the other operates selectively over recovered contextual (episodic) information. The experiment described here was designed to test this account, by requiring retrieval of different kinds of contextual information to that required in previous relevant studies. Participants heard words spoken in either a male or female voice at study, and ERPs were acquired at test where all words were presented visually. Half of the test words had been spoken at study. Participants first made an old/new judgment, distinguishing via key press between studied and unstudied words. For words judged 'old', participants indicated the voice in which the word had been spoken at study, and their confidence (high/low) in the voice judgment. There was evidence for only one of the two frontal old/new effects that had been identified in the previous studies. One possibility is that the ERP effect in previous studies that was tied specifically to recollection reflects processes operating over only some kinds of contextual information. An alternative is that the index reflects processes that are engaged primarily when there are few contextual features that distinguish between studied stimuli.
Sleep-Effects on Implicit and Explicit Memory in Repeated Visual Search
Assumpcao, Leonardo; Gais, Steffen
2013-01-01
In repeated visual search tasks, facilitation of reaction times (RTs) due to repetition of the spatial arrangement of items occurs independently of RT facilitation due to improvements in general task performance. Whereas the latter represents typical procedural learning, the former is a kind of implicit memory that depends on the medial temporal lobe (MTL) memory system and is impaired in patients with amnesia. A third type of memory that develops during visual search is the observers’ explicit knowledge of repeated displays. Here, we used a visual search task to investigate whether procedural memory, implicit contextual cueing, and explicit knowledge of repeated configurations, which all arise independently from the same set of stimuli, are influenced by sleep. Observers participated in two experimental sessions, separated by either a nap or a controlled rest period. In each of the two sessions, they performed a visual search task in combination with an explicit recognition task. We found that (1) across sessions, MTL-independent procedural learning was more pronounced for the nap than rest group. This confirms earlier findings, albeit from different motor and perceptual tasks, showing that procedural memory can benefit from sleep. (2) Likewise, the sleep group compared with the rest group showed enhanced context-dependent configural learning in the second session. This is a novel finding, indicating that the MTL-dependent, implicit memory underlying contextual cueing is also sleep-dependent. (3) By contrast, sleep and wake groups displayed equivalent improvements in explicit recognition memory in the second session. Overall, the current study shows that sleep affects MTL-dependent as well as MTL-independent memory, but it affects different, albeit simultaneously acquired, forms of MTL-dependent memory differentially. PMID:23936363
Process dissociation between contextual retrieval and item recognition.
Weis, Susanne; Specht, Karsten; Klaver, Peter; Tendolkar, Indira; Willmes, Klaus; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén
2004-12-22
We employed a source memory task in an event-related fMRI study to dissociate MTL processes associated with either contextual retrieval or item recognition. To introduce context during study, stimuli (photographs of buildings and natural landscapes) were transformed into one of four single-color scales: red, blue, yellow, or green. In the subsequent old/new recognition memory test, all stimuli were presented as gray-scale photographs, and 'old' responses were followed by a four-alternative source judgment referring to the color in which the stimulus had been presented during study. Our results suggest a clear-cut process dissociation within the human MTL. While an activity increase accompanies successful retrieval of contextual information, an activity decrease provides a familiarity signal that is sufficient for successful item recognition.
Strong Recurrent Networks Compute the Orientation-Tuning of Surround Modulation in Primate V1
Shushruth, S.; Mangapathy, Pradeep; Ichida, Jennifer M.; Bressloff, Paul C.; Schwabe, Lars; Angelucci, Alessandra
2012-01-01
In macaque primary visual cortex (V1), neuronal responses to stimuli inside the receptive field (RF) are modulated by stimuli in the RF surround. This modulation is orientation-specific. Previous studies suggested that for some cells this specificity may not be fixed, but changes with the stimulus orientation presented to the RF. We demonstrate, in recording studies, that this tuning behavior is instead highly prevalent in V1 and, in theoretical work, that it arises only if V1 operates in a regime of strong local recurrence. Strongest surround suppression occurs when the stimuli in the RF and the surround are iso-oriented, and strongest facilitation when the stimuli are cross-oriented. This is the case even when the RF is sub-optimally activated by a stimulus of non-preferred orientation, but only if this stimulus can activate the cell when presented alone. This tuning behavior emerges from the interaction of lateral inhibition (via the surround pathways), which is tuned to the RF’s preferred orientation, with weakly-tuned, but strong, local recurrent connections, causing maximal withdrawal of recurrent excitation at the feedforward input orientation. Thus, horizontal and feedback modulation of strong recurrent circuits allows the tuning of contextual effects to change with changing feedforward inputs. PMID:22219292
Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon
2012-01-01
Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116
Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity
Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin
2017-01-01
Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied randomly over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely abolished (Experiment 2). However, when we applied a small random displacement to the search items in the 2-dimensional (2D) plane while keeping the depth information constant, contextual cueing was preserved (Experiment 3). We conclude that the contextual cueing effect is robust in contexts defined by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information in the learning of spatial context when depth information is available. PMID:28912739
Altering attentional control settings causes persistent biases of visual attention.
Knight, Helen C; Smith, Daniel T; Knight, David C; Ellison, Amanda
2016-01-01
Attentional control settings have an important role in guiding visual behaviour. Previous work within cognitive psychology has found that the deployment of general attentional control settings can be modulated by training. However, research has not yet established whether long-term modifications of one particular type of attentional control setting can be induced. To address this, we investigated persistent alterations to feature search mode, also known as an attentional bias, towards an arbitrary stimulus in healthy participants. Subjects were biased towards the colour green by an information sheet. Attentional bias was assessed using a change detection task. After an interval of either 1 or 2 weeks, participants were then retested on the same change detection task, tested on a different change detection task where colour was irrelevant, or were biased towards an alternative colour. One experiment included trials in which the distractor stimuli (but never the target stimuli) were green. The key finding was that green stimuli in the second task attracted attention, despite this impairing task performance. Furthermore, inducing a second attentional bias did not override the initial bias toward green objects. The attentional bias also persisted for at least two weeks. It is argued that this persistent attentional bias is mediated by a chronic change to participants' attentional control settings, which is aided by long-term representations involving contextual cueing. We speculate that similar changes to attentional control settings and continuous cueing may relate to attentional biases observed in psychopathologies. Targeting these biases may be a productive approach to treatment.
Contextual effects on perceived contrast: figure-ground assignment and orientation contrast.
Self, Matthew W; Mookhoek, Aart; Tjalma, Nienke; Roelfsema, Pieter R
2015-02-02
Figure-ground segregation is an important step in the path leading to object recognition. The visual system segregates objects ('figures') in the visual scene from their backgrounds ('ground'). Electrophysiological studies in awake-behaving monkeys have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to responding to the background. We hypothesized that similar changes in neural firing would take place in early visual areas of the human visual system, leading to changes in the perception of low-level visual features. In this study, we investigated whether contrast perception is affected by figure-ground assignment using stimuli similar to those in the electrophysiological studies in monkeys. We measured contrast discrimination thresholds and perceived contrast for Gabor probes placed on figures or the background and found that the perceived contrast of the probe was increased when it was placed on a figure. Furthermore, we tested how this effect compared with the well-known effect of orientation contrast on perceived contrast. We found that figure-ground assignment and orientation contrast produced changes in perceived contrast of a similar magnitude, and that they interacted. Our results demonstrate that figure-ground assignment influences perceived contrast, consistent with an effect of figure-ground assignment on activity in early visual areas of the human visual system. © 2015 ARVO.
Ito, Rutsuko; Everitt, Barry J; Robbins, Trevor W
2005-01-01
The hippocampus (HPC) is known to be critically involved in the formation of associations between contextual/spatial stimuli and behaviorally significant events, playing a pivotal role in learning and memory. However, increasing evidence indicates that the HPC is also essential for more basic motivational processes. The amygdala, by contrast, is important for learning about the motivational significance of discrete cues. This study investigated the effects of excitotoxic lesions of the rat HPC and the basolateral amygdala (BLA) on the acquisition of a number of appetitive behaviors known to be dependent on the formation of Pavlovian associations between a reward (food) and discrete stimuli or contexts: (1) conditioned/anticipatory locomotor activity to food delivered in a specific context and (2) autoshaping, where rats learn to show conditioned discriminated approach to a discrete visual CS+. While BLA lesions had minimal effects on conditioned locomotor activity, hippocampal lesions facilitated the development of both conditioned activity to food and autoshaping behavior, suggesting that hippocampal lesions may have increased the incentive motivational properties of food and associated conditioned stimuli, consistent with the hypothesis that the HPC is involved in inhibitory processes in appetitive conditioning. (c) 2005 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro
2005-01-01
Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…
Wolfin, Michael S; Raguso, Robert A; Davidowitz, Goggy; Goyret, Joaquin
2018-06-12
The use of sensory information to control behavior usually involves the integration of sensory input from different modalities. This integration is affected by behavioral states and experience, and it is also sensitive to the spatiotemporal patterns of stimulation and other general contextual cues. Following the finding that hawkmoths can use relative humidity (RH) as a proxy for nectar content during close-range foraging, we evaluate here whether RH could be used during locomotive flight under two simulated contexts in a wind tunnel: (1) dispersion and (2) search phase of the foraging behavior. Flying moths showed a bias towards air with a higher RH in a context devoid of foraging stimuli, but the addition of visual and olfactory floral stimuli elicited foraging responses that overrode the behavioral effects of RH. We discuss the results in relation to the putative adaptive value of the context-dependent use of sensory information. © 2018. Published by The Company of Biologists Ltd.
Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand
2017-09-01
An increasing number of studies have shown a close link between perception and action, which is thought to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left- and right-hand responses to objects (e.g., cups) are faster and more accurate when the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account, displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Validation of the Contextual Assessment Inventory for Problem Behavior
ERIC Educational Resources Information Center
Carr, Edward G.; Ladd, Mara V.; Schulte, Christine F.
2008-01-01
Problem behavior is a major barrier to successful community integration for people with developmental disabilities. Recently, there has been increased interest in identifying contextual factors involving setting events and discriminative stimuli that impact the display of problem behavior. The authors previously developed the "Contextual…
Poppenk, Jordan; Norman, Kenneth A.
2012-01-01
Recent cognitive research has revealed better source memory performance for familiar relative to novel stimuli. Here we consider two possible explanations for this finding. The source memory advantage for familiar stimuli could arise because stimulus novelty induces attention to stimulus features at the expense of contextual processing, resulting in diminished overall levels of contextual processing at study for novel (vs. familiar) stimuli. Another possibility is that stimulus information retrieved from long-term memory (LTM) provides scaffolding that facilitates the formation of item-context associations. If contextual features are indeed more effectively bound to familiar (vs. novel) items, the relationship between contextual processing at study and subsequent source memory should be stronger for familiar items. We tested these possibilities by applying multi-voxel pattern analysis (MVPA) to a recently collected functional magnetic resonance imaging (fMRI) dataset, with the goal of measuring contextual processing at study and relating it to subsequent source memory performance. Participants were scanned with fMRI while viewing novel proverbs, repeated proverbs (previously novel proverbs that were shown in a pre-study phase), and previously known proverbs in the context of one of two experimental tasks. After scanning was complete, we evaluated participants’ source memory for the task associated with each proverb. Drawing upon fMRI data from the study phase, we trained a classifier to detect on-task processing (i.e., how strongly the correct task set was activated). On-task processing was greater for previously known than novel proverbs and similar for repeated and novel proverbs. However, both within and across participants, the relationship between on-task processing and subsequent source memory was stronger for repeated than novel proverbs and similar for previously known and novel proverbs. Finally, focusing on the repeated condition, we found that higher levels of hippocampal activity during the pre-study phase, which we used as an index of episodic encoding, led to a stronger relationship between on-task processing at study and subsequent memory. Together, these findings suggest different mechanisms may be primarily responsible for superior source memory for repeated and previously known stimuli. Specifically, they suggest that prior stimulus knowledge enhances memory by boosting the overall level of contextual processing, whereas stimulus repetition enhances the probability that contextual features will be successfully bound to item features. Several possible theoretical explanations for this pattern are discussed. PMID:22820636
Contextual cueing impairment in patients with age-related macular degeneration.
Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan
2013-09-12
Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
Extinction of Conditioned Responses to Methamphetamine-Associated Stimuli in Healthy Humans.
Cavallo, Joel S; Ruiz, Nicholas A; de Wit, Harriet
2016-07-01
Contextual stimuli present during drug experiences become associated with the drug through Pavlovian conditioning and are thought to sustain drug-seeking behavior. Thus, extinction of conditioned responses is an important target for treatment. To date, acquisition and extinction to drug-paired cues have been studied in animal models or drug-dependent individuals, but rarely in non-drug users. We have recently developed a procedure to study acquisition of conditioned responses after single doses of methamphetamine (MA) in healthy volunteers. Here, we examined extinction of these responses and their persistence after conditioning. Healthy adults (18-35 years; N = 20) received two pairings of audio-visual stimuli with MA (20 mg oral) or placebo. Responses to stimuli were assessed before and after conditioning, using three tasks: behavioral preference, attentional bias, and subjective "liking." Subjects exhibited behavioral preference for the drug-paired stimuli at the first post-conditioning test, but this declined rapidly on subsequent extinction tests. They also exhibited a bias to initially look towards the drug-paired stimuli at the first post-test session, but not thereafter. Subjects who experienced more positive subjective drug effects during conditioning exhibited a smaller decline in preference during the extinction phase. Further, longer inter-session intervals during the extinction phase were associated with less extinction of the behavioral preference measure. Conditioned responses after two pairings with MA extinguish quickly, and are influenced by both subjective drug effects and the extinction interval. Characterizing and refining this conditioning procedure will aid in understanding the acquisition and extinction processes of drug-related conditioned responses in humans.
Influences of motor contexts on the semantic processing of action-related language.
Yang, Jie
2014-09-01
The contribution of the sensory-motor system to the semantic processing of language stimuli is still controversial. To address the issue, the present article focuses on the impact of motor contexts (i.e., comprehenders' motor behaviors, motor-training experiences, and motor expertise) on the semantic processing of action-related language and reviews the relevant behavioral and neuroimaging findings. The existing evidence shows that although motor contexts can influence the semantic processing of action-related concepts, the mechanism of the contextual influences is still far from clear. Future investigations will be needed to clarify (1) whether motor contexts only modulate activity in motor regions, (2) whether the contextual influences are specific to the semantic features of language stimuli, and (3) what factors can determine the facilitatory or inhibitory contextual influences on the semantic processing of action-related language.
Elaborated contextual framing is necessary for action-based attitude acquisition.
Laham, Simon M; Kashima, Yoshihisa; Dix, Jennifer; Wheeler, Melissa; Levis, Bianca
2014-01-01
Although arm flexion and extension have been implicated as conditioners of attitudes, recent work casts some doubt on the nature and strength of the coupling of these muscle contractions and stimulus evaluation. We propose that the elaborated contextual framing of flexion and extension actions is necessary for attitude acquisition. Results showed that when flexion and extension were disambiguated via elaborated contextual cues (i.e., framed as collect and discard within a foraging context), neutral stimuli processed under flexion were liked more than neutral stimuli processed under extension. However, when unelaborated framing was used (e.g., mere stimulus zooming effects), stimulus evaluation did not differ as a function of muscle contractions. These results suggest that neither arm contractions per se nor unelaborated framings are sufficient for action-based attitude acquisition, but that elaborated framings are necessary.
A neuronal network model for context-dependence of pitch change perception.
Huang, Chengcheng; Englitz, Bernhard; Shamma, Shihab; Rinzel, John
2015-01-01
Many natural stimuli have perceptual ambiguities that can be cognitively resolved by the surrounding context. In audition, preceding context can bias the perception of speech and non-speech stimuli. Here, we develop a neuronal network model that can account for how context affects the perception of pitch change between a pair of successive complex tones. We focus especially on an ambiguous comparison: listeners experience opposite percepts (either ascending or descending) for an ambiguous tone pair depending on the spectral location of preceding context tones. We developed a recurrent, firing-rate network model, which detects the frequency-change direction of successively played stimuli and successfully accounts for the context-dependent perception demonstrated in behavioral experiments. The model consists of two tonotopically organized, excitatory populations, E_up and E_down, that respond preferentially to ascending or descending stimuli in pitch, respectively. These preferences are generated by an inhibitory population that provides inhibition asymmetric in frequency to the two populations; context dependence arises from slow facilitation of inhibition. We show that contextual influence depends on the spectral distribution of preceding tones and the tuning width of inhibitory neurons. Further, we demonstrate, using phase-space analysis, how the facilitated inhibition from previous stimuli and the waning inhibition from the just-preceding tone shape the competition between the E_up and E_down populations. In sum, our model accounts for contextual influences on the pitch-change perception of an ambiguous tone pair by introducing a novel decoding strategy based on direction-selective units. The model's network architecture and slow facilitating inhibition emerge as predictions of neuronal mechanisms for these perceptual dynamics. Since the model structure does not depend on the specific stimuli, we show that it generalizes to other contextual effects and stimulus types.
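The abstract specifies the model at the level of architecture only. As a rough illustration of the core competition (a minimal Python sketch; the parameter values, update rule, and names are our placeholders, not the published model):

def percept(drive=(1.0, 1.0), facil=(1.0, 1.3), w=1.5,
            tau=0.02, dt=1e-3, T=0.5):
    # r[0], r[1]: rates of the direction-selective populations E_up, E_down.
    # facil models slow facilitation of the inhibition onto each population
    # by the preceding context tones; for an ambiguous tone pair (equal
    # drives), the less-inhibited population wins the competition.
    r = [0.0, 0.0]
    for _ in range(int(T / dt)):
        total = r[0] + r[1]                      # shared inhibitory signal
        for i in (0, 1):
            inp = drive[i] - w * facil[i] * total
            r[i] += dt / tau * (-r[i] + max(inp, 0.0))
    return "ascending" if r[0] > r[1] else "descending"

With equal drives, the context enters only through the facilitation term, which is the qualitative behavior the phase-space analysis above examines.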
McKay, B E; Persinger, M A
2003-04-18
Acute post-training exposures to weak-intensity theta-burst stimulation (TBS) patterned complex magnetic fields attenuated the magnitude of conditioned fear learning for contextual stimuli. A similar learning impairment was evoked in a linear and dose-dependent manner by pre-conditioning injections of the polyamine agmatine. The present study examined the hypothesis that whole-body applications of the TBS-patterned complex magnetic field, when co-administered with systemic agmatine treatment, may combine to evoke impairments in contextual fear learning. Within minutes of 4 mg/kg agmatine injections, male Wistar rats were fear conditioned to contextual stimuli and immediately exposed for 30 min to the TBS-patterned complex magnetic field or to sham conditions. The TBS-patterned complex magnetic field treatment was found to summate linearly with the contextual fear learning impairment evoked by agmatine treatment alone. Furthermore, for sham-treated rats, but not for rats exposed to the synthetic magnetic field pattern, the magnitude of learned fear decreased, and the variability in learning increased, as the K-index (a measure of change in intensity of the time-varying ambient geomagnetic field) increased during the 3-hr intervals over which conditioning and testing sessions were conducted.
Purpura, Keith P.; Victor, Jonathan D.
2014-01-01
Segmenting the visual image into objects is a crucial stage of visual processing. Object boundaries are typically associated with differences in luminance, but discontinuities in texture also play an important role. We showed previously that a subpopulation of neurons in V2 in anesthetized macaques responds to orientation discontinuities parallel to their receptive field orientation. Such single-cell responses could be a neurophysiological correlate of texture boundary detection. Neurons in V1, on the other hand, are known to have contextual response modulations such as iso-orientation surround suppression, which also produce responses to orientation discontinuities. Here, we use pseudorandom multiregion grating stimuli of two frame durations (20 and 40 ms) to probe and compare texture boundary responses in V1 and V2 in anesthetized macaque monkeys. In V1, responses to texture boundaries were observed for only the 40 ms frame duration and were independent of the orientation of the texture boundary. However, in transient V2 neurons, responses to such texture boundaries were robust for both frame durations and were stronger for boundaries parallel to the neuron's preferred orientation. The dependence of these processes on stimulus duration and orientation indicates that responses to texture boundaries in V2 arise independently of contextual modulations in V1. In addition, because the responses in transient V2 neurons are sensitive to the orientation of the texture boundary but those of V1 neurons are not, we suggest that V2 responses are the correlate of texture boundary detection, whereas contextual modulation in V1 serves other purposes, possibly related to orientation “pop-out.” PMID:24599456
Visual cues that are effective for contextual saccade adaptation.
Azadi, Reza; Harwood, Mark R
2014-06-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: able to have different amplitude movements to the same retinal displacement dependent on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also effect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test if the movement or context postsaccade were critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.
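Contextual adaptation in these experiments is conveniently summarized by saccadic gain computed separately per contextual cue; a small sketch (our helper, with hypothetical names):

import numpy as np

def gain_by_context(amplitude, eccentricity, context):
    # Saccadic gain = saccade amplitude / pre-step target eccentricity.
    # Contextual adaptation appears as gains that diverge between the
    # outward-step and inward-step contexts.
    amplitude, eccentricity, context = map(np.asarray,
                                           (amplitude, eccentricity, context))
    return {c: float(np.mean(amplitude[context == c] / eccentricity[context == c]))
            for c in np.unique(context)}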
Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics
Coen-Cagli, Ruben; Dayan, Peter; Schwartz, Odelia
2012-01-01
Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience. PMID:22396635
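For reference, the canonical divisive normalization that this model generalizes (Heeger-style; our notation, and a simplification of the statistics-dependent version described above) writes the response of a unit with linear drive L_i as

R_i = \frac{\gamma \, L_i^{n}}{\sigma^{n} + \sum_j w_{ij} L_j^{n}},

where the sum runs over the normalization pool. On our reading of the abstract, the generalization effectively makes the surround weights w_{ij} contingent on the inferred statistical dependence between center and surround, so that a statistically independent surround contributes little to the pool and can even permit facilitation under weak stimulation.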
High visual resolution matters in audiovisual speech perception, but only for some.
Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G
2016-07-01
The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
Contextual control of attentional allocation in human discrimination learning.
Uengoer, Metin; Lachnit, Harald; Lotz, Anja; Koenig, Stephan; Pearce, John M
2013-01-01
In 3 human predictive learning experiments, we investigated whether the allocation of attention can come under the control of contextual stimuli. In each experiment, participants initially received a conditional discrimination for which one set of cues was trained as relevant in Context 1 and irrelevant in Context 2, and another set was relevant in Context 2 and irrelevant in Context 1. For Experiments 1 and 2, we observed that a second discrimination based on cues that had previously been trained as relevant in Context 1 during the conditional discrimination was acquired more rapidly in Context 1 than in Context 2. Experiment 3 revealed a similar outcome when new stimuli from the original dimensions were used in the test stage. Our results support the view that the associability of a stimulus can be controlled by the stimuli that accompany it.
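One way to formalize contextually gated associability (our sketch, not the authors' model) is a Rescorla-Wagner update whose per-cue learning rate is stored per context:

def rw_update(V, alpha, cues, context, outcome, beta=0.2):
    # V      : dict cue -> associative strength
    # alpha  : dict (context, cue) -> associability (learned attention)
    # cues   : cues present on the trial; context: current context label
    # outcome: 1.0 if the predicted event occurs, else 0.0
    error = outcome - sum(V[c] for c in cues)
    for c in cues:
        V[c] += alpha[(context, c)] * beta * error
    return error

Faster acquisition for cues previously trained as relevant in a given context then falls out of the higher stored alpha[(context, cue)] values.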
Brain Processing of Emotional Scenes in Aging: Effect of Arousal and Affective Context
Mathieu, Nicolas Gilles; Gentaz, Edouard; Harquel, Sylvain; Vercueil, Laurent; Chauvin, Alan; Bonnet, Stéphane; Campagne, Aurélie
2014-01-01
Research on emotion has shown an age-related increase in the prevalence of positive information relative to negative information, an effect called the positivity effect. Using the Late Positive Potential (LPP), a cerebral marker sensitive to attention, our study investigated to what extent the arousal level of negative scenes is processed differently between young and older adults and to what extent the arousal level of negative scenes, depending on its value, may contextually modulate the cerebral processing of positive (and neutral) scenes and favor the observation of a positivity effect with age. To this end, two groups of negative scenes characterized by two distinct arousal levels (high and low) were displayed in two separate experimental blocks that also included positive and neutral pictures. The two blocks differed only in their negative pictures, so as to create two negative global contexts for the processing of the positive and neutral pictures. The results show that the relative processing of different arousal levels of negative stimuli, as reflected by the LPP, appears similar between the two age groups. However, lower activity for negative stimuli is observed in the older group for both tested arousal levels. The processing of positive information seems to be preserved with age and is not contextually affected by negative stimuli in either younger or older adults. For neutral stimuli, significantly reduced activity is observed in older adults in the low-arousal negative context block. Globally, our study reveals that the positivity effect is mainly due to an age-related modulation of the processing of negative stimuli, regardless of their arousal level. It also suggests that the processing of neutral stimuli may be modulated with age, depending on the negative context in which the stimuli are presented. These age-related effects could help explain differences in emotional preference with age. PMID:24932857
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
Task-relevant information is prioritized in spatiotemporal contextual cueing.
Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun
2016-11-01
Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing in situations where multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals are prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests that our visual behavior is focused on regularities that are relevant to our current goal.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1976-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location, including a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
The effect of contextual cues on the encoding of motor memories.
Howard, Ian S; Wolpert, Daniel M; Franklin, David W
2013-05-01
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
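For concreteness, a velocity-dependent curl field of the kind used here applies a force perpendicular to the hand's velocity; a minimal sketch (the gain value and names are our placeholders):

import numpy as np

def curl_force(vel_xy, gain=13.0, clockwise=True):
    # vel_xy : hand velocity [vx, vy] in m/s
    # gain   : field constant in N*s/m (placeholder value)
    # Returns the force in N; flipping `clockwise` gives the opposing field
    # that the contextual cue must disambiguate.
    s = 1.0 if clockwise else -1.0
    B = s * gain * np.array([[0.0, 1.0], [-1.0, 0.0]])
    return B @ np.asarray(vel_xy, dtype=float)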
Conflict adaptation in time: foreperiods as contextual cues for attentional adjustment.
Wendt, Mike; Kiesel, Andrea
2011-10-01
Interference evoked by distractor stimulus information, such as flankers in the Eriksen task, is reduced when the proportion of conflicting stimuli is increased. This modulation is sensitive to contextual cues such as stimulus location or color, suggesting attentional adjustment to conflict contingencies on the basis of context information. In the present study, we explored whether conflict adjustment is modulated by temporal variation of conflict likelihood. To this end, we associated low and high proportions of conflict stimuli with foreperiods of different lengths. Flanker interference was higher with foreperiods associated with low conflict proportions, suggesting that participants use the foreperiod as a contextual cue for attentional adjustment. We conjecture that participants initially adopt the strategy useful for conflict contingencies associated with short foreperiods, and then readjust during the trial, in the absence of any additional exogenous cue, when the imperative stimulus has not occurred during a certain time interval.
Enhanced discrimination between threatening and safe contexts in high-anxious individuals
Glotzbach-Schoon, Evelyn; Tadda, Regina; Andreatta, Marta; Tröger, Christian; Ewald, Heike; Grillon, Christian; Pauli, Paul; Mühlberger, Andreas
2014-01-01
Trait anxiety, a stable personality trait associated with increased fear responses to threat, is regarded as a risk factor for the development and maintenance of anxiety disorders. Although the effect of trait anxiety has been examined with regard to explicit threat cues, little is known about the effect of trait anxiety on contextual threat learning. To assess this issue, extreme groups of low and high trait anxiety underwent a contextual fear conditioning protocol using virtual reality. Two virtual office rooms served as the conditioned contexts. One virtual office room (CXT+) was paired with unpredictable electrical stimuli. In the other virtual office room, no electrical stimuli were delivered (CXT−). High-anxious participants tended to show faster acquisition of startle potentiation in the CXT+ versus the CXT− than low-anxious participants. This enhanced contextual fear learning might function as a risk factor for anxiety disorders that are characterized by sustained anxiety. PMID:23384512
Restrictive vs. non-restrictive composition: a magnetoencephalography study
Leffel, Timothy; Lauter, Miriam; Westerlund, Masha; Pylkkänen, Liina
2014-01-01
Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high level contextual and potentially also pragmatic factors. PMID:25379512
A Unifying Motif for Spatial and Directional Surround Suppression.
Liu, Liu D; Miller, Kenneth D; Pack, Christopher C
2018-01-24
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1. As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors.
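The SSN's key ingredient is a supralinear power-law input/output function, r = k*max(V, 0)**n with n > 1, so effective recurrent (and especially inhibitory) gain grows with drive. A toy two-population sketch of how this yields integration at weak drive and suppression at strong drive (weights loosely follow published two-population SSN examples; all values are illustrative, not this paper's fits):

import numpy as np

def ssn_rate(c_center, c_surround, share=(0.3, 1.0), k=0.04, n=2.0,
             T=1.0, dt=5e-4):
    # One excitatory (E) and one inhibitory (I) unit with power-law I/O.
    # The surround drives I more strongly than E (share), standing in for
    # the surround pathways. Returns the steady-state E rate.
    W = np.array([[1.25, -0.65],    # onto E: from E, from I
                  [1.20, -0.50]])   # onto I: from E, from I
    tau = np.array([0.020, 0.010])
    ext = np.array([c_center + share[0] * c_surround,
                    c_center + share[1] * c_surround])
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        V = W @ r + ext
        r += dt / tau * (-r + k * np.maximum(V, 0.0) ** n)
    return r[0]

for c in (2.0, 40.0):   # weak vs strong drive
    print(c, ssn_rate(c, c) / ssn_rate(c, 0.0))   # ratio > 1, then < 1

The mechanism is dimension-agnostic, which is why the spatial-surround predictions carry over to the direction domain in MT.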
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli comprising both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even though the auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1973-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
The EEG-based BCI (brain-computer interface) speller, especially the gaze-independent BCI speller, has become a hot topic in recent years. It provides a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. In the rapidly presented paradigms used for such BCI spellers, the brain must deploy both stimulus-driven and stimulus-related attention. Few researchers have studied the mechanisms of the brain's response to such rapidly presented BCI stimuli. In this study, we compared the distribution of brain activation in visual, auditory, and audiovisual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both the visual and the auditory stimuli in the audiovisual combined paradigm: both contribute to the activation of brain regions, with the visual stimuli being predominant. Brain regions related to the visual stimuli were located mainly in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be driven by the auditory stimuli. These regions played an important role in the audiovisual bimodal paradigm. These new findings are important for the future study of ERP spellers as well as of the mechanisms underlying rapidly presented stimuli.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
Poppenk, Jordan; Norman, Kenneth A
2012-11-01
Recent cognitive research has revealed better source memory performance for familiar relative to novel stimuli. Here we consider two possible explanations for this finding. The source memory advantage for familiar stimuli could arise because stimulus novelty induces attention to stimulus features at the expense of contextual processing, resulting in diminished overall levels of contextual processing at study for novel (vs. familiar) stimuli. Another possibility is that stimulus information retrieved from long-term memory (LTM) provides scaffolding that facilitates the formation of item-context associations. If contextual features are indeed more effectively bound to familiar (vs. novel) items, the relationship between contextual processing at study and subsequent source memory should be stronger for familiar items. We tested these possibilities by applying multi-voxel pattern analysis (MVPA) to a recently collected functional magnetic resonance imaging (fMRI) dataset, with the goal of measuring contextual processing at study and relating it to subsequent source memory performance. Participants were scanned with fMRI while viewing novel proverbs, repeated proverbs (previously novel proverbs that were shown in a pre-study phase), and previously known proverbs in the context of one of two experimental tasks. After scanning was complete, we evaluated participants' source memory for the task associated with each proverb. Drawing upon fMRI data from the study phase, we trained a classifier to detect on-task processing (i.e., how strongly was the correct task set activated). On-task processing was greater for previously known than novel proverbs and similar for repeated and novel proverbs. However, both within and across participants, the relationship between on-task processing and subsequent source memory was stronger for repeated than novel proverbs and similar for previously known and novel proverbs. Finally, focusing on the repeated condition, we found that higher levels of hippocampal activity during the pre-study phase, which we used as an index of episodic encoding, led to a stronger relationship between on-task processing at study and subsequent memory. Together, these findings suggest different mechanisms may be primarily responsible for superior source memory for repeated and previously known stimuli. Specifically, they suggest that prior stimulus knowledge enhances memory by boosting the overall level of contextual processing, whereas stimulus repetition enhances the probability that contextual features will be successfully bound to item features. Several possible theoretical explanations for this pattern are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
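To make the analysis concrete, here is a minimal, hypothetical sketch of the kind of MVPA described above: a cross-validated classifier is trained on study-phase patterns to detect which task set is active, and its per-trial evidence for the correct task serves as the "on-task processing" measure. All inputs and names are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def on_task_evidence(patterns, task_labels):
    """patterns: (n_trials, n_voxels) study-phase activity;
    task_labels: 0/1 integer code for the task performed on each trial.
    Returns the cross-validated probability assigned to the correct task."""
    clf = LogisticRegression(max_iter=1000)
    proba = cross_val_predict(clf, patterns, task_labels,
                              cv=5, method='predict_proba')
    return proba[np.arange(len(task_labels)), task_labels]

# The per-trial evidence could then be related to subsequent source memory
# (e.g., compared between later-correct and later-incorrect source
# judgments), separately for novel, repeated, and previously known proverbs.
```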
Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding
ERIC Educational Resources Information Center
Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.
2010-01-01
The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…
Raber, Jacob; Allen, Antiño R; Rosi, Susanna; Sharma, Sourabh; Dayger, Catherine; Davis, Matthew J; Fike, John R
2013-06-01
The space radiation environment contains high-energy charged particles such as ⁵⁶Fe, which could pose a significant hazard to hippocampal function in astronauts during and after their mission(s). The mechanisms underlying impairments in cognition are not clear but might involve alterations in the percentage of neurons in the dentate gyrus expressing the plasticity-related immediate early gene Arc. Previously, we showed effects of cranial ⁵⁶Fe irradiation on hippocampus-dependent contextual freezing and on the percentage of Arc-positive cells in the enclosed, but not free, blade. Because it is unclear whether whole-body ⁵⁶Fe irradiation causes similar effects on these markers of hippocampal function, in the present study we quantified the effects of whole-body ⁵⁶Fe irradiation (600 MeV; 0.5 or 1 Gy) on hippocampus-dependent and hippocampus-independent cognitive performance and determined whether these effects were associated with changes in Arc expression in the enclosed and free blades of the dentate gyrus. Whole-body ⁵⁶Fe irradiation affected contextual, but not cued, fear freezing and the percentage of Arc-positive cells in the enclosed and free blades. In mice tested for contextual freezing, there was a correlation between Arc-positive cells in the enclosed and free blades. In addition, in mice irradiated with 0.5 Gy, contextual freezing in the absence of aversive stimuli correlated with the percentage of Arc-positive cells in the enclosed blade. In mice tested for cued freezing, there was no correlation between Arc-positive cells in the enclosed and free blades. In contrast, cued freezing in the presence or absence of aversive stimuli correlated with Arc-positive cells in the free blade. In addition, in mice irradiated with 1 Gy, cued freezing in the absence of aversive stimuli correlated with the percentage of Arc-positive neurons in the free blade. These data indicate that while whole-body ⁵⁶Fe radiation affects contextual freezing and Arc-positive cells in the dentate gyrus, the enclosed blade might be more important for contextual freezing and the free blade more important for cued freezing. Copyright © 2013 Elsevier B.V. All rights reserved.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points, which can be haptic, visual or auditory reference signals; 2) real objects and their matching virtual reality representations have different effects on postural sway when used as visual anchors; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or by the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses to laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose using the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Maddux, Jean-Marie; Lacroix, Franca; Chaudhri, Nadia
2014-09-19
Environmental contexts in which drugs of abuse are consumed can trigger craving, a subjective Pavlovian-conditioned response that can facilitate drug-seeking behavior and prompt relapse in abstinent drug users. We have developed a procedure to study the behavioral and neural processes that mediate the impact of context on alcohol-seeking behavior in rats. Following acclimation to the taste and pharmacological effects of 15% ethanol in the home cage, male Long-Evans rats receive Pavlovian discrimination training (PDT) in conditioning chambers. In each daily (Mon-Fri) PDT session, 16 trials each of two different 10 sec auditory conditioned stimuli occur. During one stimulus, the CS+, 0.2 ml of 15% ethanol is delivered into a fluid port for oral consumption. The second stimulus, the CS-, is not paired with ethanol. Across sessions, entries into the fluid port during the CS+ increase, whereas entries during the CS- stabilize at a lower level, indicating that a predictive association between the CS+ and ethanol is acquired. During PDT each chamber is equipped with a specific configuration of visual, olfactory and tactile contextual stimuli. Following PDT, extinction training is conducted in the same chamber that is now equipped with a different configuration of contextual stimuli. The CS+ and CS- are presented as before, but ethanol is withheld, which causes a gradual decline in port entries during the CS+. At test, rats are placed back into the PDT context and presented with the CS+ and CS- as before, but without ethanol. This manipulation triggers a robust and selective increase in the number of port entries made during the alcohol predictive CS+, with no change in responding during the CS-. This effect, referred to as context-induced renewal, illustrates the powerful capacity of contexts associated with alcohol consumption to stimulate alcohol-seeking behavior in response to Pavlovian alcohol cues.
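As a minimal sketch, under assumptions not stated in the abstract, the discrimination and renewal effects described above could be summarized from per-trial port-entry counts roughly as follows; array names and scoring choices are hypothetical.

```python
import numpy as np

def discrimination_score(cs_plus_entries, cs_minus_entries):
    """Normalized CS+ preference: 0.5 = no discrimination, 1.0 = responding
    only during the CS+."""
    p, m = np.mean(cs_plus_entries), np.mean(cs_minus_entries)
    return p / (p + m)

def renewal_effect(test_cs_plus, extinction_end_cs_plus):
    """Context-induced renewal: rise in CS+ port entries at test (back in
    the training context) relative to the end of extinction."""
    return np.mean(test_cs_plus) - np.mean(extinction_end_cs_plus)
```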
Using Embedded Visual Coding to Support Contextualization of Historical Texts
ERIC Educational Resources Information Center
Baron, Christine
2016-01-01
This mixed-method study examines the think-aloud protocols of 48 randomly assigned undergraduate students to understand what effect embedding a visual coding system, based on reliable visual cues for establishing historical time period, would have on novice history students' ability to contextualize historic documents. Results indicate that using…
Political ideology is contextually variable and flexible rather than fixed.
Morgan, G Scott; Skitka, Linda J; Wisneski, Daniel C
2014-06-01
Hibbing et al. argue that the liberal-conservative continuum is (a) universal and (b) grounded in psychological differences in sensitivity to negative stimuli. Our commentary argues that both claims overlook the importance of context. We review evidence that the liberal-conservative continuum is far from universal and that ideological differences are contextually flexible rather than fixed.
Retell, James D; Becker, Stefanie I; Remington, Roger W
2016-01-01
An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.
Using Retrieval Cues to Attenuate Return of Fear in Individuals With Public Speaking Anxiety.
Shin, Ki Eun; Newman, Michelle G
2018-03-01
Even after successful exposure, relapse is not uncommon. Based on the retrieval model of fear extinction (e.g., Vervliet, Craske, & Hermans, 2013), return of fear can occur after exposure due to an elapse of time (spontaneous recovery) or change in context (contextual renewal). The use of external salient stimuli presented throughout extinction (i.e., retrieval cues [RCs]) has been suggested as a potential solution to this problem (Bouton, 2002). The current study examined whether RCs attenuated return of fear in individuals with public speaking anxiety. Sixty-five participants completed a brief exposure while presented with two RC stimuli aimed at a variety of senses (visual, tactile, olfactory, and auditory). Later, half the participants were tested for return of fear in a context different from the exposure context, and the other half in the same context. Half of each context group were presented with the same cues as in exposure, while the other half were not. Return of fear due to an elapse of time, change in context, and effects of RCs were evaluated on subjective, behavioral, and physiological measures of anxiety. Although contextual renewal was not observed, results supported effects of RCs in reducing spontaneous recovery on behavioral and physiological measures of anxiety. There was also evidence that participants who were reminded of feeling anxious during exposure by the RCs benefited more from using them at follow-up, whereas those who perceived the cues as comforting (safety signals) benefited less. Clinical implications of the findings are discussed. Copyright © 2017. Published by Elsevier Ltd.
Temporal and peripheral extraction of contextual cues from scenes during visual search.
Koehler, Kathryn; Eckstein, Miguel P
2017-02-01
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task, observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contribution of the cues to search decision accuracy approximates that expected from the optimal combination of statistically independent cues. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of that cue in the visual periphery; instead, it is related to background cues providing the least inherent information about the precise location of the target in the scene.
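For the optimal-combination benchmark mentioned above, one standard formulation (assumed here, not quoted from the paper) is that independent Gaussian cues combine in quadrature: the ideal combined sensitivity is the square root of the sum of squared single-cue d' values, from which an unbiased observer's accuracy follows.

```python
import numpy as np
from scipy.stats import norm

def combined_dprime(single_cue_dprimes):
    """Ideal-observer sensitivity for statistically independent cues."""
    return float(np.sqrt(np.sum(np.square(single_cue_dprimes))))

def predicted_accuracy(dprime):
    """Proportion correct for an unbiased yes/no decision at this d'."""
    return norm.cdf(dprime / 2.0)

# Hypothetical single-cue sensitivities for the three contextual cues:
print(predicted_accuracy(combined_dprime([0.8, 0.5, 0.3])))
```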
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936
Neural responses in the macaque V1 to bar stimuli with various lengths presented on the blind spot.
Matsumoto, Masayuki; Komatsu, Hidehiko
2005-05-01
Although there is no retinal input within the blind spot, it is filled with the same visual attributes as its surround. Earlier studies showed that neural responses are evoked at the retinotopic representation of the blind spot in the primary visual cortex (V1) when perceptual filling-in of a surface or completion of a bar occurs. To determine whether these neural responses correlate with perception, we recorded from V1 neurons whose receptive fields overlapped the blind spot. Bar stimuli of various lengths were presented at the blind spots of monkeys while they performed a fixation task. One end of the bar was fixed at a position outside the blind spot, and the position of the other end was varied. Perceived bar length was measured using a similar set of bar stimuli in human subjects. As long as one end of the bar was inside the blind spot, the perceived bar length remained constant, and when the bar exceeded the blind spot, perceptual completion occurred, and the perceived bar length increased substantially. Some V1 neurons of the monkey exhibited a significant increase in their activity when the bar exceeded the blind spot, even though the amount of the retinal stimulation increased only slightly. These response increases coincided with perceptual completion observed in human subjects and were much larger than would be expected from simple spatial summation and could not be explained by contextual modulation. We conclude that the completed bar appearing on the part of the receptive field embedded within the blind spot gave rise to the observed increase in neuronal activity.
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation), and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
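The contextual cueing effect itself is conventionally scored as the search-time benefit for repeated over novel configurations; a minimal sketch with hypothetical column names follows.

```python
import pandas as pd

def cueing_effect(trials):
    """trials: one row per trial with columns 'rt' (ms), 'context'
    ('repeated' or 'novel'), and 'homogeneity' (e.g., low/medium/high).
    Returns the RT benefit for repeated displays per homogeneity level."""
    means = trials.groupby(['homogeneity', 'context'])['rt'].mean().unstack()
    return means['novel'] - means['repeated']  # positive = cueing benefit
```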
Decreased visual detection during subliminal stimulation.
Bareither, Isabelle; Villringer, Arno; Busch, Niko A
2014-10-17
What is the perceptual fate of invisible stimuli: are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system to that observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.
False memory for context and true memory for context similarly activate the parahippocampal cortex.
Karanian, Jessica M; Slotnick, Scott D
2017-06-01
The role of the parahippocampal cortex is currently a topic of debate. One view posits that the parahippocampal cortex specifically processes spatial layouts and sensory details (i.e., the visual-spatial processing view). In contrast, the other view posits that the parahippocampal cortex more generally processes spatial and non-spatial contexts (i.e., the general contextual processing view). A large number of studies have found that true memories activate the parahippocampal cortex to a greater degree than false memories, which would appear to support the visual-spatial processing view as true memories are typically associated with greater visual-spatial detail than false memories. However, in previous studies, contextual details were also greater for true memories than false memories. Thus, such differential activity in the parahippocampal cortex may have reflected differences in contextual processing, which would challenge the visual-spatial processing view. In the present functional magnetic resonance imaging (fMRI) study, we employed a source memory paradigm to investigate the functional role of the parahippocampal cortex during true memory and false memory for contextual information to distinguish between the visual-spatial processing view and the general contextual processing view. During encoding, abstract shapes were presented to the left or right of fixation. During retrieval, old shapes were presented at fixation and participants indicated whether each shape was previously on the "left" or "right" followed by an "unsure", "sure", or "very sure" confidence rating. The conjunction of confident true memories for context and confident false memories for context produced activity in the parahippocampal cortex, which indicates that this region is associated with contextual processing. Furthermore, the direct contrast of true memory and false memory produced activity in the visual cortex but did not produce activity in the parahippocampal cortex. The present evidence suggests that the parahippocampal cortex is associated with general contextual processing rather than only being associated with visual-spatial processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
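The bitrates referred to above are typically computed with the standard Wolpaw information-transfer-rate formula; a minimal sketch with hypothetical values follows.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw bitrate in bits/min for an N-class selection task."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Hypothetical example: 4 targets, 85% accuracy, 10 selections per minute.
print(wolpaw_itr(4, 0.85, 10))
```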
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audiovisual stimuli in patients with migraine. PMID:24961903
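A common instance of the CDF-based analysis mentioned in the Methods (assumed here; the paper may differ in detail) is Miller's race-model inequality: if the audiovisual CDF exceeds the summed unimodal CDFs at some latency, the facilitation cannot be explained by a race between independent channels.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side='right') / rts.size

# Hypothetical per-condition RT samples (ms):
# rt_av, rt_a, rt_v = ...
t = np.linspace(150, 800, 131)  # evaluation grid in ms
# violation = ecdf(rt_av, t) - np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
# Positive values indicate race-model violations, i.e., audiovisual
# integration beyond statistical facilitation.
```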
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. The individuals' switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition on behavioral state were observed. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.
Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus
2017-05-01
Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.
Contextual signals in visual cortex.
Khan, Adil G; Hofer, Sonja B
2018-06-05
Vision is an active process. What we perceive strongly depends on our actions, intentions and expectations. During visual processing, these internal signals therefore need to be integrated with the visual information from the retina. The mechanisms of how this is achieved by the visual system are still poorly understood. Advances in recording and manipulating neuronal activity in specific cell types and axonal projections together with tools for circuit tracing are beginning to shed light on the neuronal circuit mechanisms of how internal, contextual signals shape sensory representations. Here we review recent work, primarily in mice, that has advanced our understanding of these processes, focusing on contextual signals related to locomotion, behavioural relevance and predictions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by a shared native language of the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were recognized better from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Mannion, Damien J; Donkin, Chris; Whitford, Thomas J
2017-01-01
We investigated the relationship between psychometrically-defined schizotypy and the ability to detect a visual target pattern. Target detection is typically impaired by a surrounding pattern (context) with an orientation that is parallel to the target, relative to a surrounding pattern with an orientation that is orthogonal to the target (orientation-dependent contextual modulation). Based on reports that this effect is reduced in those with schizophrenia, we hypothesised that there would be a negative relationship between the relative score on psychometrically-defined schizotypy and the relative effect of orientation-dependent contextual modulation. We measured visual contrast detection thresholds and scores on the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) from a non-clinical sample (N = 100). Contrary to our hypothesis, we find an absence of a monotonic relationship between the relative magnitude of orientation-dependent contextual modulation of visual contrast detection and the relative score on any of the subscales of the O-LIFE. The apparent difference between this result and previous reports on those with schizophrenia suggests that orientation-dependent contextual modulation may be an informative condition in which schizophrenia and psychometrically-defined schizotypy are dissociated. However, further research is also required to clarify the strength of orientation-dependent contextual modulation in those with schizophrenia.
Li, W; Thier, P; Wehrhahn, C
2000-02-01
We studied the effects of various patterns as contextual stimuli on human orientation discrimination, and on responses of neurons in V1 of alert monkeys. When a target line is presented along with various contextual stimuli (masks), human orientation discrimination is impaired. For most V1 neurons, responses elicited by a line in the receptive field (RF) center are suppressed by these contextual patterns. Orientation discrimination thresholds of human observers are elevated slightly when the target line is surrounded by orthogonal lines. For randomly oriented lines, thresholds are elevated further and even more so for lines parallel to the target. Correspondingly, responses of most V1 neurons to a line are suppressed. Although contextual lines inhibit the amplitude of orientation tuning functions of most V1 neurons, they do not systematically alter the tuning width. Elevation of human orientation discrimination thresholds decreases with increasing curvature of masking lines, so does the inhibition of V1 neuronal responses. A mask made of straight lines yields the strongest interference with human orientation discrimination and produces the strongest suppression of neuronal responses. Elevation of human orientation discrimination thresholds is highest when a mask covers only the immediate vicinity of the target line. Increasing the masking area results in less interference. On the contrary, suppression of neuronal responses in V1 increases with increasing mask size. Our data imply that contextual interference observed in human orientation discrimination is in part directly related to contextual inhibition of neuronal activity in V1. However, the finding that interference with orientation discrimination is weaker for larger masks suggests a figure-ground segregation process that is not located in V1.
Working memory dependence of spatial contextual cueing for visual search.
Pollmann, Stefan
2018-05-10
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence has also consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.
Frings, Christian; Rothermund, Klaus
2017-11-01
Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the integration of object features with response features while neglecting the context in which an object is perceived. In 3 experiments, we analyzed whether contextual features can also become integrated into S-R episodes. The data showed that a fundamental principle of visual perception, figure-ground segmentation, modulates the binding of contextual features. Only features belonging to the figure region of a context but not features forming the background were integrated with responses into S-R episodes, retrieval of which later on had an impact upon behavior. Our findings suggest that perception guides the selection of context features for integration with responses into S-R episodes. Results of our study have wide-ranging implications for an understanding of context effects in learning and behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Perceptions and choices of Brazilian children as consumers of food products.
Mazzonetto, A C; Fiates, G M R
2014-07-01
In order to identify children's perceptions about food choices and their behavior as consumers and influencers of food purchases, 16 focus groups were conducted with 71 students aged 8-10 years. Transcriptions were submitted to lexical analysis using the Alceste software. The initial contextual unit broke down into 1469 elementary contextual units, 84% of which were retained in the descending hierarchical classification. Results from the larger and more specific classes are reported here. Children were students from public schools where energy-dense nutrient-poor (EDNP) food consumption was severely restricted, but these foods were still bought by the children themselves or requested from their parents. Television shows and advertisements motivated food consumption in general, and consumption of EDNP foods was associated with social events and eating outside the home. Situations that emphasize the pleasure and satisfaction of not eating according to food guidelines are being addressed by traditional educational strategies directed at the individual. Appealing to the senses and employing visual stimuli to get to the affective component of children's attitudes seems to be an alternative tool for promoting healthy eating, instead of the traditional approach based on recommendations and restrictions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Garbusow, Maria; Schad, Daniel J; Sebold, Miriam; Friedel, Eva; Bernhardt, Nadine; Koch, Stefan P; Steinacher, Bruno; Kathmann, Norbert; Geurts, Dirk E M; Sommer, Christian; Müller, Dirk K; Nebe, Stephan; Paul, Sören; Wittchen, Hans-Ulrich; Zimmermann, Ulrich S; Walter, Henrik; Smolka, Michael N; Sterzer, Philipp; Rapp, Michael A; Huys, Quentin J M; Schlagenhauf, Florian; Heinz, Andreas
2016-05-01
In detoxified alcohol-dependent patients, alcohol-related stimuli can promote relapse. However, to date, the mechanisms by which contextual stimuli promote relapse have not been elucidated in detail. One hypothesis is that such contextual stimuli directly stimulate the motivation to drink via associated brain regions like the ventral striatum and thus promote alcohol seeking, intake and relapse. Pavlovian-to-Instrumental-Transfer (PIT) may be one of those behavioral phenomena contributing to relapse, capturing how Pavlovian conditioned (contextual) cues determine instrumental behavior (e.g. alcohol seeking and intake). We used a PIT paradigm during functional magnetic resonance imaging to examine the effects of classically conditioned Pavlovian stimuli on instrumental choices in n = 31 detoxified patients diagnosed with alcohol dependence and n = 24 healthy controls matched for age and gender. Patients were followed up over a period of 3 months. We observed that (1) there was a significant behavioral PIT effect for all participants, which was significantly more pronounced in alcohol-dependent patients; (2) PIT was significantly associated with blood oxygen level-dependent (BOLD) signals in the nucleus accumbens (NAcc) in subsequent relapsers only; and (3) PIT-related NAcc activation was associated with, and predictive of, critical outcomes (amount of alcohol intake and relapse during a 3 months follow-up period) in alcohol-dependent patients. These observations show for the first time that PIT-related BOLD signals, as a measure of the influence of Pavlovian cues on instrumental behavior, predict alcohol intake and relapse in alcohol dependence. © 2015 Society for the Study of Addiction.
Contextual remapping in visual search after predictable target-location changes.
Conci, Markus; Sun, Luning; Müller, Hermann J
2011-07-01
Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
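The sensitivity and response-bias measures referred to above are standard signal-detection quantities; a minimal sketch of computing them from hit and false-alarm rates (stimulus-present vs. stimulus-absent trials) follows.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2.
    Rates of exactly 0 or 1 should be adjusted (e.g., a 1/(2N) correction)
    before calling this."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical example: 70% hits, 10% false alarms.
d, c = dprime_and_criterion(0.70, 0.10)
print(d, c)
```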
Configural learning in contextual cuing of visual search.
Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R
2016-08-01
Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
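To make the elemental-versus-configural contrast tangible, here is a minimal Rescorla-Wagner sketch under stated assumptions: each repeated context is coded either as its individual distractor elements or as one configural unit, and "consistent" training presents a single arrangement while "inconsistent" training distributes the same number of trials over several arrangements of the same elements. This illustrates the logic of the comparison, not the authors' published simulations.

```python
# Rescorla-Wagner learning with elemental vs. configural context coding.
# Illustrative sketch of the modeling contrast, not the published model.
ALPHA, LAM = 0.1, 1.0  # learning rate and asymptote

def train(schedule, coding):
    """schedule: one configuration (tuple of element labels) per trial."""
    v = {}
    for ctx in schedule:
        cues = list(ctx) if coding == "elemental" else [ctx]
        total = sum(v.get(c, 0.0) for c in cues)     # summed prediction
        for c in cues:
            v[c] = v.get(c, 0.0) + ALPHA * (LAM - total)
    return v

def predict(v, ctx, coding):
    cues = list(ctx) if coding == "elemental" else [ctx]
    return sum(v.get(c, 0.0) for c in cues)

elements = ("A", "B", "C", "D")
consistent = [elements] * 40                         # one arrangement, 40 trials
variants = [("A", "B", "C", "D"), ("B", "A", "D", "C"),
            ("C", "D", "A", "B"), ("D", "C", "B", "A")]
inconsistent = variants * 10                         # four arrangements, 40 trials

for coding in ("elemental", "configural"):
    vc, vi = train(consistent, coding), train(inconsistent, coding)
    print(f"{coding:>10}: consistent={predict(vc, elements, coding):.2f}, "
          f"inconsistent={predict(vi, variants[0], coding):.2f}")
```

Only the configural coding yields a stronger learned association for consistently arranged contexts, mirroring the ordinal pattern the abstract describes.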
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Temporal Influence on Awareness
1995-12-01
[List-of-figures fragment; only the figure captions survive: test setup timing (measured vs. expected modal delays, in ms); Experiment I: visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II: visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay variable).]
Sex differences in visual attention to erotic and non-erotic stimuli.
Lykins, Amy D; Meana, Marta; Strauss, Gregory P
2008-04-01
It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but were evident in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
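As a reading aid, the cueing effect in this paradigm is simply the mean RT difference between novel and repeated configurations. A minimal sketch with made-up RTs:

```python
# Contextual cueing effect = mean RT(novel) - mean RT(repeated).
# The RT values below are made up for illustration.
from statistics import mean

rts_ms = {
    "repeated": [812, 790, 765, 741, 720, 705],
    "novel":    [815, 808, 799, 795, 790, 788],
}
effect = mean(rts_ms["novel"]) - mean(rts_ms["repeated"])
print(f"contextual cueing effect: {effect:.0f} ms")
```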
Central and peripheral vision loss differentially affects contextual cueing in visual search.
Geringswald, Franziska; Pollmann, Stefan
2015-09-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed at a magnitude comparable to that of controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as a source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.
Schankin, Andrea; Schubö, Anna
2009-05-01
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
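For readers unfamiliar with the N2pc, it is conventionally quantified as the contralateral-minus-ipsilateral voltage at lateral posterior electrodes (e.g., PO7/PO8) in a post-stimulus window around 200-300 ms. The sketch below computes that difference on simulated waveforms; the window, electrodes and data are illustrative assumptions, not this study's exact settings.

```python
# N2pc estimate: contralateral minus ipsilateral voltage, averaged over
# a 200-300 ms window. Simulated waveforms for illustration only.
import numpy as np

fs = 500                                  # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)        # epoch time axis (s)
rng = np.random.default_rng(1)
ipsi = rng.normal(0.0, 0.2, t.size)       # ipsilateral average (microvolts)
# contralateral copy with an added negativity peaking near 250 ms
contra = ipsi - 1.5 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))

window = (t >= 0.2) & (t <= 0.3)
n2pc = (contra - ipsi)[window].mean()
print(f"N2pc amplitude: {n2pc:.2f} microvolts")
```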
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When the propensity to inattention is high, ERP recordings show diminished amplification, concomitant with a decrease in theta-band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (although no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect and reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.
Coco, Moreno I; Malcolm, George L; Keller, Frank
2014-01-01
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.
Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-01-01
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245
The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis
2008-09-01
In routine eye examinations, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity between isoluminant coloured stimuli and high-contrast black-and-white stimuli for true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using different charts in two ways: with standard achromatic stimuli (black symbols on a white background) and with isoluminant coloured stimuli (white symbols on a yellow background, or grey symbols on a blue, green or red background). The isoluminant tests thus had colour contrast only, with no luminance contrast. Visual acuity under the standard method and the colour tests was first studied in subjects with good visual acuity, using the best vision correction where necessary. The same was done for subjects with a defocused eye and for subjects with true amblyopia. Defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed a worsening of visual acuity compared with the visual acuity estimated with the standard high-contrast method (black symbols on a white background).
Function Transformation without Reinforcement
Tonneau, François; Arreola, Fara; Martínez, Alma Gabriela
2006-01-01
In studies of function transformation, participants initially are taught to match stimuli in the presence of a contextual cue, X; the stimuli to be matched bear some formal relation to each other, for example, a relation of opposition or difference. In a second phase, the participants are taught to match arbitrary stimuli (say, A and B) in the presence of X. In a final test, A often displays behavioral functions that differ from those of B, and can be predicted from the nature of the relation associated with X in the initial training phase. Here we report function-transformation effects in the absence of selection responses and of their reinforcers. In three experiments with college students, exposure to relations of difference or identity modified the responses given to later stimuli. In Experiment 1, responses to a test stimulus A varied depending on preexposure to pairs of colors that were distinct from A but exemplified relations of difference or identity. In Experiment 2, a stimulus A acquired distinct functions, depending on its previous pairing with a contextual cue X that had itself been paired with identity or difference among colors. Experiment 3 confirmed the results of Experiment 2 with a modified design. Our data are consistent with the notion that relations of identity or difference can serve as stimuli for Pavlovian processes, and, in compound with other cues, produce apparent function-transformation effects. PMID:16776058
Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana
2015-05-01
To compare visual fixation at social stimuli in Rett syndrome (RS) and autism spectrum disorder (ASD) patients. Visual fixation at social stimuli was analyzed in 14 RS female patients (age range 4-30 years), 11 ASD male patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli) presented for 8 seconds each on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation at social stimuli was significantly higher in the RS group compared to the ASD and even the TD groups. Visual fixation at social stimuli seems to be one more endophenotype making RS very different from ASD.
Thalamic nuclei convey diverse contextual information to layer 1 of visual cortex
Imhof, Fabia; Martini, Francisco J.; Hofer, Sonja B.
2017-01-01
Sensory perception depends on the context within which a stimulus occurs. Prevailing models emphasize cortical feedback as the source of contextual modulation. However, higher-order thalamic nuclei, such as the pulvinar, interconnect with many cortical and subcortical areas, suggesting a role for the thalamus in providing sensory and behavioral context – yet the nature of the signals conveyed to cortex by higher-order thalamus remains poorly understood. Here we use axonal calcium imaging to measure information provided to visual cortex by the pulvinar equivalent in mice, the lateral posterior nucleus (LP), as well as the dorsal lateral geniculate nucleus (dLGN). We found that dLGN conveys retinotopically precise visual signals, while LP provides distributed information from the visual scene. Both LP and dLGN projections carry locomotion signals. However, while dLGN inputs often respond to positive combinations of running and visual flow speed, LP signals discrepancies between self-generated and external visual motion. This higher-order thalamic nucleus therefore conveys diverse contextual signals that inform visual cortex about visual scene changes not predicted by the animal's own actions. PMID:26691828
Renewal, Resurgence, and Alternative Reinforcement Context
Sweeney, Mary M.; Shahan, Timothy A.
2015-01-01
Resurgence, relapse induced by the removal of alternative reinforcement, and renewal, relapse induced by a change in contextual stimuli, are typically studied separately in operant conditioning paradigms. In analogous treatments of operant problem behavior, aspects of both relapse phenomena can operate simultaneously. Therefore, the purpose of this study was to examine a novel method for studying resurgence and renewal in the same experimental preparation. An alternative source of reinforcement was available during extinction for one group of rats (a typical resurgence preparation). Another group experienced an operant renewal preparation in which the extinction context was distinguished via olfactory and visual stimuli. A third group experienced alternative reinforcement delivery in the new context, a novel combination of typical resurgence and renewal preparations. Removal of alternative reinforcement and/or a change in context induced relapse, relative to an extinction-only control group. When alternative reinforcement was delivered in a novel context, the alternative response was less persistent relative to when extinction of the alternative response took place in the context in which it was trained. This methodology might be used to illustrate shared (or distinct) mechanisms of resurgence and renewal, and to determine how delivering alternative reinforcement in another context may affect persistence and relapse. PMID:25936876
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J
2011-10-01
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.
Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian
2013-07-09
In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.
The face-specific proportion congruency effect: social stimuli as contextual cues.
Jiménez-Moya, Gloria; Rodríguez-Bailón, Rosa; Lupiáñez, Juan
2018-06-18
Previous research shows that larger interference is observed in contexts associated with a high proportion of congruent trials than in those associated with a low proportion of congruent trials. Given that one of the most relevant contexts for human beings is social context, researchers have recently explored the possibility that social stimuli could also work as contextual cues for the allocation of attentional control. In fact, it has been shown that individuals use social categories (i.e., men and women) as cues to allocate attentional control. In this work, we go further by showing that individual faces (instead of the social categories they belong to) associated with a high proportion of congruent trials can also lead to larger interference effects compared to individual faces predicting a relatively low proportion of congruent trials. Furthermore, we show that faces associated with a high proportion of congruent trials are more positively evaluated than faces associated with a high proportion of incongruent trials. These results demonstrate that unique human faces are potential contextual cues that can be employed to apply cognitive control when performing an automatic task.
Filingeri, Davide; Morris, Nathan B; Jay, Ollie
2017-01-01
What is the central question of this study? Investigations on inhibitory/facilitatory modulation of vision, touch and pain show that conditioning stimuli outside the receptive field of testing stimuli modulate the central processing of visual, touch and painful stimuli. We asked whether contextual modulation also exists in human temperature integration. What is the main finding and its importance? Progressive decreases in whole-body mean skin temperature (the conditioning stimulus) significantly increased local thermosensitivity to skin warming but not cooling (the testing stimuli) in a dose-dependent fashion. In resembling the central mechanisms underlying endogenous analgesia, our findings point to the existence of an endogenous thermosensory system in humans that could modulate local skin thermal sensitivity to facilitate thermal behaviour. Although inhibitory/facilitatory central modulation of vision and pain has been investigated, contextual modulation of skin temperature integration has not been explored. Hence, we tested whether progressive decreases in whole-body mean skin temperature (Tsk; a large conditioning stimulus) alter the magnitude estimation of local warming and cooling stimuli applied to hairy and glabrous skin. On four separate occasions, eight men (27 ± 5 years old) underwent a 30 min whole-body cooling protocol (water-perfused suit; temperature, 5°C), during which a quantitative thermosensory test, consisting of reporting the perceived magnitude of warming and cooling stimuli (±8°C from 30°C baseline) applied to the hand (palm/dorsum) and foot (sole/dorsum), was performed before cooling and every 10 min thereafter. The cooling protocol resulted in large progressive reductions in Tsk [10 min, -3.36°C (95% confidence interval -2.62 to -4.10); 20 min, -5.21°C (-4.47 to -5.95); and 30 min, -6.32°C (-5.58 to -7.05); P < 0.001], with minimal changes (∼0.08°C) in rectal temperature. While thermosensitivity to local skin cooling remained unchanged (P = 0.831), sensitivity to skin warming increased significantly at each level of Tsk for all skin regions [10 min, +4.9% (-1.1 to +11.0); 20 min, +6.1% (+0.1-12.2); and 30 min, +7.9% (+1.9-13.9); P = 0.009]. Linear regression indicated a 1.2% °C⁻¹ increase in warm thermosensitivity with whole-body skin cooling. Overall, large decreases in Tsk significantly facilitated warm but not cold sensory processing of local thermal stimuli, in a dose-dependent fashion. In highlighting a novel feature of human temperature integration, these findings point to the existence of an endogenous thermosensory system that could modulate local skin thermal sensitivity in relationship to whole-body thermal states. © 2016 The Authors. Experimental Physiology © 2016 The Physiological Society.
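The reported dose-response relationship can be sanity-checked from the group means quoted above. The back-of-envelope fit below regresses the warm-sensitivity increase on the magnitude of skin cooling; fitting only the three group means gives a slope near 1% °C⁻¹, in the same ballpark as the 1.2% °C⁻¹ the authors obtained from the full data set.

```python
# Back-of-envelope check of the dose-response slope using the three
# group means reported in the abstract (not the authors' full analysis).
import numpy as np

skin_cooling_c = np.array([3.36, 5.21, 6.32])   # |change| in mean Tsk (deg C)
warm_gain_pct = np.array([4.9, 6.1, 7.9])       # increase in warm sensitivity (%)
slope, intercept = np.polyfit(skin_cooling_c, warm_gain_pct, 1)
print(f"slope: {slope:.2f}% per deg C of whole-body skin cooling")
```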
Sysoeva, Olga V; Galuta, Ilia A; Davletshina, Maria S; Orekhova, Elena V; Stroganova, Tatiana A
2017-01-01
Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception, at least two phenomena critically depend on the E/I balance in visual cortex: spatial suppression (SS) and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low-contrast stimuli, owing to the prevalence of inhibitory contextual modulations in the former case and excitatory ones in the latter. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate previous findings and to explore the putative contribution of deficient inhibitory influences to an enhanced SF index in ASD, a cornerstone of the interpretation proposed by Foss-Feig et al. (2013). SS and SF were examined in 40 boys with ASD with a broad spectrum of intellectual abilities (63 < IQ < 127) and 44 typically developing (TD) boys, aged 6-15 years. Stimuli of small (1°) and large (12°) radius were presented under high (100%) and low (1%) contrast conditions. The Social Responsiveness Scale and the Sensory Profile Questionnaire were used to assess autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced, in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index, but not the SS index, correlated with the severity of autism and with poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of a correlation between the SF and SS indexes, paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample, emphasizes the role of enhanced excitatory influences themselves in the abnormalities of low-level visual phenomena found in ASD.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words and, in the case of the left inferior frontal gyrus, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
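Synchronization performance in such tapping tasks is typically summarized by the mean asynchrony (tap time minus stimulus onset; usually negative, reflecting anticipation) and its standard deviation, which is the variability measure at issue here. A minimal sketch with hypothetical tap times:

```python
# Mean asynchrony and variability for taps against a 600-ms metronome.
# The tap times are hypothetical.
from statistics import mean, stdev

onsets_ms = [i * 600 for i in range(10)]          # metronome onsets
taps_ms = [-35, 589, 1178, 1790, 2385, 2960, 3590, 4175, 4790, 5385]
# pair each tap with the nearest metronome onset
asynchronies = [tap - min(onsets_ms, key=lambda o: abs(o - tap))
                for tap in taps_ms]
print(f"mean asynchrony: {mean(asynchronies):.1f} ms; "
      f"variability (SD): {stdev(asynchronies):.1f} ms")
```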
Hurtado, Esteban; Haye, Andrés; González, Ramiro; Manes, Facundo; Ibáñez, Agustín
2009-06-26
Several event related potential (ERP) studies have investigated the time course of different aspects of evaluative processing in social bias research. Various reports suggest that the late positive potential (LPP) is modulated by basic evaluative processes, and some reports suggest that in-/outgroup relative position affects ERP responses. In order to study possible LPP blending between facial race processing and semantic valence (positive or negative words), we recorded ERPs while indigenous and non-indigenous participants who were matched by age and gender performed an implicit association test (IAT). The task involved categorizing faces (ingroup and outgroup) and words (positive and negative). Since our paradigm implies an evaluative task with positive and negative valence association, a frontal distribution of LPPs similar to that found in previous reports was expected. At the same time, we predicted that LPP valence lateralization would be modulated not only by positive/negative associations but also by particular combinations of valence, face stimuli and participant relative position. Results showed that, during an IAT, indigenous participants with greater behavioral ingroup bias displayed a frontal LPP that was modulated in terms of complex contextual associations involving ethnic group and valence. The LPP was lateralized to the right for negative valence stimuli and to the left for positive valence stimuli. This valence lateralization was influenced by the combination of valence and membership type relevant to compatibility with prejudice toward a minority. Behavioral data from the IAT and an explicit attitudes questionnaire were used to clarify this finding and showed that ingroup bias plays an important role. Both ingroup favoritism and indigenous/non-indigenous differences were consistently present in the data. Our results suggest that frontal LPP is elicited by contextual blending of evaluative judgments of in-/outgroup information and positive vs. negative valence association and confirm recent research relating in-/outgroup ERP modulation and frontal LPP. LPP modulation may cohere with implicit measures of attitudes. The convergence of measures that were observed supports the idea that racial and valence evaluations are strongly influenced by context. This result adds to a growing set of evidence concerning contextual sensitivity of different measures of prejudice.
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
ERIC Educational Resources Information Center
Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J.
2010-01-01
Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua
2016-01-01
Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard-pattern stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hourglass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher for moving than for static visual stimuli. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. The visual stimulation of various shapes, patterns and sizes used in this study indicated left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus were the areas involved in processing the static visual stimulus.
Brosowsky, Nicholaus P; Crump, Matthew J C
2016-08-01
Recent work suggests that environmental cues associated with previous attentional control settings can rapidly and involuntarily adjust attentional priorities. The current study tests predictions from adaptive-learning and memory-based theories of contextual control about the role of intentions for setting attentional priorities. To extend the empirical boundaries of contextual control phenomena, and to determine whether theoretical principles of contextual control are generalizable, we used a novel bi-dimensional stimulus sampling task. Subjects viewed briefly presented arrays of letters and colors presented above or below fixation, and identified specific stimuli according to a dimensional (letter or color) and positional cue. Location was predictive of the cued dimension, but not of position or identity. In contrast to previous findings, contextual control failed to develop through automatic, adaptive-learning processes. Instead, previous experience with intentionally changing attentional sampling priorities between different contexts was required for contextual control to develop. Copyright © 2016 Elsevier Inc. All rights reserved.
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, particularly visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from the visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are thus able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.
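The covariation of pupil size and SCR described above amounts to a per-trial correlation between the two autonomic measures. A minimal sketch with simulated trials (the coupling through a latent arousal variable is an assumption for illustration, not the study's data):

```python
# Per-trial correlation between pupil size and skin conductance response.
# Simulated data: both measures load on a latent arousal factor.
import numpy as np

rng = np.random.default_rng(7)
arousal = rng.normal(0.0, 1.0, 30)                       # latent arousal
pupil_mm = 3.5 + 0.4 * arousal + rng.normal(0, 0.1, 30)  # pupil diameter (mm)
scr_us = 0.6 + 0.3 * arousal + rng.normal(0, 0.1, 30)    # SCR amplitude (microsiemens)
r = np.corrcoef(pupil_mm, scr_us)[0, 1]
print(f"pupil-SCR correlation: r = {r:.2f}")
```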
Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen
2017-09-01
Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera-engaged (i.e., depicted human figure looking toward the camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to accurately and objectively measure visual fixations. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and fixated more on objects with which a human figure was task-engaged than when a human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task-engagement appears to have a guiding effect on visual attention that may be of benefit to SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age- and gender-matched peers without neurological impairments indicate that these two groups find similar photograph regions to be worthy of visual fixation. Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task-engagement may be a beneficial component of contextual photographs. Copyright © 2017 Elsevier Inc. All rights reserved.
Ricciardelli, Paola; Lugli, Luisa; Pellicano, Antonello; Iani, Cristina; Nicoletti, Roberto
2016-01-01
In three experiments, we tested whether the amount of attentional resources needed to process a face displaying a neutral, angry, or fearful facial expression with direct or averted gaze depends on task instructions and face presentation. To this end, we used a Rapid Serial Visual Presentation paradigm in which participants in Experiment 1 were first explicitly asked to discriminate whether the expression of a target face (T1) with direct or averted gaze was angry or neutral, and then to judge the orientation of a landscape (T2). Experiment 2 was identical to Experiment 1 except that participants had to discriminate the gender of the T1 face, and fearful faces were also presented, randomly intermixed within each block of trials. Experiment 3 differed from Experiment 2 only in that angry and fearful faces were never presented within the same block. The findings indicated that the presence of the attentional blink (AB) for face stimuli depends on specific combinations of gaze direction and emotional facial expression, and crucially revealed that contextual factors (e.g., the explicit instruction to process the facial expression and the presence of other emotional faces) can modify and even reverse the AB, suggesting a flexible and more contextualized deployment of attentional resources in face processing. PMID:26898473
Audiovisual perceptual learning with multiple speakers.
Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J
2016-05-01
One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants with an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.
Children's Understanding of Globes as a Model of the Earth: A Problem of Contextualizing
ERIC Educational Resources Information Center
Ehrlen, Karin
2008-01-01
Visual representations play an important role in science teaching. The way in which visual representations may help children to acquire scientific concepts is a crucial test in the debate between constructivist and socio-cultural oriented researchers. In this paper, the question is addressed as a problem of how to contextualize conceptions and…
Young Children's Visual Attention to Environmental Print as Measured by Eye Tracker Analysis
ERIC Educational Resources Information Center
Neumann, Michelle M.; Acosta, Camillia; Neumann, David L.
2014-01-01
Environmental print, such as signs and product labels, consists of both print and contextual cues designed to attract the visual attention of the reader. However, contextual cues may draw young children's attention away from the print, calling into question the value of environmental print in early reading development. Eye tracker technology was used to…
Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J
2010-06-01
Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features varying randomly across trials (multiconjunction search). Under these conditions, reaction times (RTs) were faster when all items in the display appeared at predictive ("old") relative to nonpredictive ("new") locations. However, this RT benefit was smaller than when only one set of items, namely the set sharing the target's color (but not the set in the alternative color), appeared in a predictive arrangement. In all conditions, contextual cueing was reliable on both target-present and target-absent trials and was enhanced if a predictive display was preceded by a predictive (though differently arranged) display, rather than by a nonpredictive display. These results suggest that (1) contextual cueing is confined to color subsets of items, (2) retrieving contextual associations for one color subset of items can be impeded by associations formed within the alternative subset ("contextual interference"), and (3) contextual cueing is modulated by intertrial priming.
Contextual cost: when a visual-search target is not where it should be.
Makovski, Tal; Jiang, Yuhong V
2010-02-01
Visual search is often facilitated when the search display occasionally repeats, revealing a contextual-cueing effect. According to the associative-learning account, contextual cueing arises from associating the display configuration with the target location. However, recent findings emphasizing the importance of local context near the target have given rise to the possibility that low-level repetition priming may account for the contextual-cueing effect. This study distinguishes associative learning from local repetition priming by testing whether search is directed toward a target's expected location, even when the target is relocated. After participants searched for a T among Ls in displays that repeated 24 times, they completed a transfer session where the target was relocated locally to a previously blank location (Experiment 1) or to an adjacent distractor location (Experiment 2). Results revealed that contextual cueing decreased as the target appeared farther away from its expected location, ultimately resulting in a contextual cost when the target swapped locations with a local distractor. We conclude that target predictability is a key factor in contextual cueing.
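For readers unfamiliar with how such effects are quantified: contextual cueing is simply the mean correct-trial RT difference between novel and repeated displays, and a negative value is the "contextual cost" described above. A minimal sketch in Python/pandas, with hypothetical column names (not the authors' data format):

    import pandas as pd

    def cueing_effect(trials: pd.DataFrame) -> pd.Series:
        """Contextual cueing = RT(new) - RT(old), per epoch.
        Assumes columns: 'epoch', 'condition' ('old' or 'new'),
        'rt_ms', and a boolean 'correct'."""
        ok = trials[trials["correct"]]                    # correct trials only
        mean_rt = ok.groupby(["epoch", "condition"])["rt_ms"].mean().unstack()
        return mean_rt["new"] - mean_rt["old"]            # >0 benefit, <0 cost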
Here Today, Gone Tomorrow – Adaptation to Change in Memory-Guided Visual Search
Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J.
2013-01-01
Visual search for a target object can be facilitated by the repeated presentation of an invariant configuration of nontargets (‘contextual cueing’). Here, we tested adaptation of learned contextual associations after a sudden, but permanent, relocation of the target. After an initial learning phase, targets were relocated within their invariant contexts and repeatedly presented at new locations, before they returned to the initial locations. Contextual cueing for relocated targets was observed neither after numerous presentations nor after insertion of an overnight break. Further experiments investigated whether learning of additional, previously unseen context-target configurations is comparable to adaptation of existing contextual associations to change. In contrast to the lack of adaptation to changed target locations, contextual cueing developed for additional invariant configurations under identical training conditions. Moreover, across all experiments, presenting relocated targets or additional contexts did not interfere with contextual cueing of initially learned invariant configurations. Overall, the adaptation of contextual memory to changed target locations was severely constrained and unsuccessful in comparison with learning of an additional set of contexts, which suggests that contextual cueing facilitates search for only one repeated target location. PMID:23555038
Global Repetition Influences Contextual Cueing.
Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Assumpção, Leonardo; Li, Hong
2018-01-01
Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1-4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics may be a crucial factor mediating the contextual cueing effect.
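The frequency manipulation above amounts to changing how many old versus freshly generated displays fill each block. A hedged sketch of one way to build such blocks (function and pool names are hypothetical, and the authors' exact sampling scheme is not specified here):

    import random

    def build_block(repeated_pool, n_repeated, n_new, make_new_display):
        """One block with n_repeated old and n_new novel displays,
        e.g., (20, 4) or (4, 20) as in Experiments 2 and 3."""
        old = [repeated_pool[i % len(repeated_pool)] for i in range(n_repeated)]
        new = [make_new_display() for _ in range(n_new)]
        trials = old + new
        random.shuffle(trials)      # randomize presentation order within the block
        return trials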
Contextual diversity is a main determinant of word identification times in young readers.
Perea, Manuel; Soares, Ana Paula; Comesaña, Montserrat
2013-09-01
Recent research with college-aged skilled readers by Adelman and colleagues revealed that contextual diversity (i.e., the number of contexts in which a word appears) is a more critical determinant of visual word recognition than mere repeated exposure (i.e., word frequency) (Psychological Science, 2006, Vol. 17, pp. 814-823). Given that contextual diversity has been claimed to be a relevant factor to word acquisition in developing readers, the effects of contextual diversity should also be a main determinant of word identification times in developing readers. A lexical decision experiment was conducted to examine the effects of contextual diversity and word frequency in young readers (children in fourth grade). Results revealed a sizable effect of contextual diversity, but not of word frequency, thereby generalizing Adelman and colleagues' data to a child population. These findings call for the implementation of dynamic developmental models of visual word recognition that go beyond a learning rule by mere exposure. Copyright © 2012 Elsevier Inc. All rights reserved.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age 23.7 years) and older (mean age 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where the knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
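As an illustration of how a length difference threshold is derived from such data, the sketch below fits a cumulative-Gaussian psychometric function to "test longer than standard" proportions and reads off the 75% point; this is the generic procedure, not necessarily the authors' exact fitting method:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)   # P("test longer than standard")

    def difference_threshold(test_lengths, p_longer):
        (mu, sigma), _ = curve_fit(psychometric, test_lengths, p_longer,
                                   p0=[np.mean(test_lengths), np.std(test_lengths)])
        return norm.ppf(0.75) * abs(sigma)        # distance from the PSE to the 75% point

A threshold of 5.85% of the standard then corresponds to difference_threshold(x, p) divided by the standard length equaling roughly 0.0585.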
Managing Spatial Selections With Contextual Snapshots
Mindek, P; Gröller, M E; Bruckner, S
2014-01-01
Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios such as the visualization of simulation data, the analysis of historical documents and the display of anatomical data. PMID:25821284
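The central data structure is easy to picture: a selection stored together with the parameter context in which it was made, so it can later be re-displayed or validated. The following is a hypothetical Python analogue for illustration only; the toolkit's actual interfaces are not reproduced here:

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class ContextualSnapshot:
        """A spatial selection plus the view state needed to interpret it."""
        selection_mask: np.ndarray                        # selected voxels/pixels
        view_params: dict = field(default_factory=dict)   # camera, transfer function, ...
        annotations: list = field(default_factory=list)

        def matches(self, current_params: dict) -> bool:
            # A selection is only meaningful in (roughly) its original context
            return all(current_params.get(k) == v
                       for k, v in self.view_params.items())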
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring on the same side of space as the limb to which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, to one single hand, or bilaterally, to both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred on the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms but, importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
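The "statistically optimal integration" benchmark referred to above is the standard maximum-likelihood cue-combination rule, in which each estimate is weighted inversely to its variance. A minimal sketch with made-up example numbers:

    import numpy as np

    def optimal_combined_estimate(estimates, variances):
        """Reliability-weighted average of independent duration estimates."""
        inv_var = 1.0 / np.asarray(variances, dtype=float)
        weights = inv_var / inv_var.sum()
        combined = float(np.dot(weights, estimates))
        combined_var = 1.0 / inv_var.sum()   # lower than either cue alone
        return combined, combined_var

    # e.g., duration estimates (ms) from low- and high-frequency stimuli:
    # optimal_combined_estimate([320.0, 360.0], [40.0**2, 60.0**2])

Sub-optimal weighting, as reported above, means the empirically fitted weights deviate from these variance-derived ones.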
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such a function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. PMID:27630555
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in the integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not of second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic functions of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focused on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention is expected to provide a new channel for non-invasive independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). This suggests that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli is a possible new channel for independent BCI, as motor imagery is.
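CSP itself is compact enough to sketch: it derives spatial filters from a generalized eigendecomposition of the two classes' channel covariance matrices. The code below is a generic NumPy/SciPy illustration (not the authors' implementation), with assumed array shapes:

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_filters=6):
        """trials_*: (n_trials, n_channels, n_samples) per attention class.
        Returns (n_filters, n_channels) spatial filters."""
        def mean_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # Solve ca w = lambda (ca + cb) w; the extreme eigenvalues give filters
        # maximizing variance for one class relative to the other.
        vals, vecs = eigh(ca, ca + cb)
        order = np.argsort(vals)
        picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
        return vecs[:, picks].T

    # Classifier features are typically log-variances of filtered trials:
    # np.log(np.var(filters @ trial, axis=1))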
On the role of working memory in spatial contextual cueing.
Travis, Susan L; Mattingley, Jason B; Dux, Paul E
2013-01-01
The human visual system receives more information than can be consciously processed. To overcome this capacity limit, we employ attentional mechanisms to prioritize task-relevant (target) information over less relevant (distractor) information. Regularities in the environment can facilitate the allocation of attention, as demonstrated by the spatial contextual cueing paradigm. When observers are exposed repeatedly to a scene and invariant distractor information, learning from earlier exposures enhances the search for the target. Here, we investigated whether spatial contextual cueing draws on spatial working memory resources and, if so, at what level of processing working memory load has its effect. Participants performed 2 tasks concurrently: a visual search task, in which the spatial configuration of some search arrays occasionally repeated, and a spatial working memory task. Increases in working memory load significantly impaired contextual learning. These findings indicate that spatial contextual cueing utilizes working memory resources.
Zold, Camila L.
2015-01-01
The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.
Whole-brain activity mapping onto a zebrafish brain atlas.
Randlett, Owen; Wee, Caroline L; Naumann, Eva A; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E; Portugues, Ruben; Lacoste, Alix M B; Riegler, Clemens; Engert, Florian; Schier, Alexander F
2015-11-01
In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open-source atlas containing molecular labels and definitions of anatomical regions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal-regulated kinase (ERK) as a readout of neural activity, we have developed a system to create and contextualize whole-brain maps of stimulus- and behavior-dependent neural activity. This mitogen-activated protein kinase (MAP)-mapping assay is technically simple, and data analysis is completely automated. Because MAP-mapping is performed on freely swimming fish, it is applicable to studies of nearly any stimulus or behavior. Here we demonstrate our high-throughput approach using pharmacological, visual and noxious stimuli, as well as hunting and feeding. The resultant maps outline hundreds of areas associated with behaviors.
De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T
2012-02-08
Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.
Kasper, Ryan W; Grafton, Scott T; Eckstein, Miguel P; Giesbrecht, Barry
2015-03-01
Visual search can be facilitated by the learning of spatial configurations that predict the location of a target among distractors. Neuropsychological and functional magnetic resonance imaging (fMRI) evidence implicates the medial temporal lobe (MTL) memory system in this contextual cueing effect, and electroencephalography (EEG) studies have identified the involvement of visual cortical regions related to attention. This work investigated two questions: (1) how memory and attention systems are related in contextual cueing; and (2) how these systems are involved in both short- and long-term contextual learning. In one session, EEG and fMRI data were acquired simultaneously in a contextual cueing task. In a second session conducted 1 week later, EEG data were recorded in isolation. The fMRI results revealed MTL contextual modulations that were correlated with short- and long-term behavioral context enhancements and attention-related effects measured with EEG. An fMRI-seeded EEG source analysis revealed that the MTL contributed the most variance to the variability in the attention enhancements measured with EEG. These results support the notion that memory and attention systems interact to facilitate search when spatial context is implicitly learned. © 2015 New York Academy of Sciences.
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted in the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). The addition of the sound also led to an arrangement of intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves discrimination of smaller intensities and impairs discrimination of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional, which may be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.
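The "sensory space" reconstruction can be illustrated generically: the responses evoked by each stimulus form a feature matrix, and a two-factor solution places each intensity as a vector whose length can be compared across conditions. A hedged sketch with a hypothetical data layout, not the authors' exact procedure:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def sensory_space(response_matrix, n_dims=2):
        """response_matrix: (n_stimuli, n_features), e.g., VEP amplitudes
        from all pairwise substitutions involving each intensity."""
        coords = FactorAnalysis(n_components=n_dims).fit_transform(response_matrix)
        lengths = np.linalg.norm(coords, axis=1)   # vector length per intensity
        return coords, lengths

Comparing the vector lengths of the low- and high-intensity subsets, with and without sound, would quantify the expansion and narrowing described above.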
ERIC Educational Resources Information Center
Huang, Tsung-Ren; Grossberg, Stephen
2010-01-01
How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…
ERIC Educational Resources Information Center
Beesley, Tom; Hanafi, Gunadi; Vadillo, Miguel A.; Shanks, David R.; Livesey, Evan J.
2018-01-01
Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular color, overt attention (as measured by the location of fixations) was biased strongly toward distractors presented in that same color. However, when participants searched for…
Top-down contextual knowledge guides visual attention in infancy.
Tummeltshammer, Kristen; Amso, Dima
2017-10-26
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.
Contextual behavior and neural circuits
Lee, Inah; Lee, Choong-Hee
2013-01-01
Animals including humans engage in goal-directed behavior flexibly in response to items and their background, which is called contextual behavior in this review. Although the concept of context has long been studied, there are differences among researchers in defining and experimenting with the concept. The current review aims to provide a categorical framework within which not only the neural mechanisms of contextual information processing but also the contextual behavior can be studied in more concrete ways. For this purpose, we categorize contextual behavior into three subcategories as follows by considering the types of interactions among context, item, and response: contextual response selection, contextual item selection, and contextual item–response selection. Contextual response selection refers to the animal emitting different types of responses to the same item depending on the context in the background. Contextual item selection occurs when there are multiple items that need to be chosen in a contextual manner. Finally, when multiple items and multiple contexts are involved, contextual item–response selection takes place whereby the animal either chooses an item or inhibits such a response depending on item–context paired association. The literature suggests that the rhinal cortical regions and the hippocampal formation play key roles in mnemonically categorizing and recognizing contextual representations and the associated items. In addition, it appears that the fronto-striatal cortical loops in connection with the contextual information-processing areas critically control the flexible deployment of adaptive action sets and motor responses for maximizing goals. We suggest that contextual information processing should be investigated in experimental settings where contextual stimuli and resulting behaviors are clearly defined and measurable, considering the dynamic top-down and bottom-up interactions among the neural systems for contextual behavior. PMID:23675321
Relativistic compression and expansion of experiential time in the left and right space.
Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano
2008-03-05
Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments, visual stimuli consisted of digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment, visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. In midline position, on the other hand, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation for stimuli of low magnitude and time overestimation for stimuli of high magnitude. These results argue for the presence of strict interactions between space, time and magnitude representation in the human brain.
Object representations in visual memory: evidence from visual illusions.
Ben-Shalom, Asaf; Ganel, Tzvi
2012-07-26
Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988
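Alpha-band event-related desynchronization of the kind analyzed here is conventionally expressed as percent power change from a prestimulus baseline, computed on induced (trial-by-trial) power rather than on the averaged evoked response. A minimal single-channel sketch, with filter settings and epoch layout as assumptions:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_erd(epochs, fs, t_start=-0.5, baseline=(-0.5, 0.0), band=(7.0, 14.0)):
        """epochs: (n_trials, n_samples), stimulus onset at t = 0.
        Returns percent change from baseline; negative values = ERD."""
        b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="bandpass")
        filtered = filtfilt(b, a, epochs, axis=1)
        power = (np.abs(hilbert(filtered, axis=1)) ** 2).mean(axis=0)  # induced power
        t = np.arange(epochs.shape[1]) / fs + t_start
        base = power[(t >= baseline[0]) & (t < baseline[1])].mean()
        return 100.0 * (power - base) / base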
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
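The pitch-variation measure, the standard deviation of fundamental frequency (F0), can be sketched with a simple framewise autocorrelation estimator (a generic method, not the analysis software used in the study):

    import numpy as np

    def f0_sd(signal, fs, frame_len=0.04, fmin=75.0, fmax=500.0):
        """Estimate F0 per frame by autocorrelation; return its SD in Hz."""
        n = int(frame_len * fs)
        lo, hi = int(fs / fmax), int(fs / fmin)
        f0s = []
        for start in range(0, len(signal) - n, n):
            frame = signal[start:start + n] - signal[start:start + n].mean()
            ac = np.correlate(frame, frame, mode="full")[n - 1:]
            lag = lo + int(np.argmax(ac[lo:hi]))
            if ac[lag] > 0.3 * ac[0]:          # crude voicing check
                f0s.append(fs / lag)
        return float(np.std(f0s)) if f0s else 0.0

Higher f0_sd values would correspond to the greater pitch variation observed after analogy instructions.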
Postural time-to-contact as a precursor of visually induced motion sickness.
Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A
2018-06-01
The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
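Postural time-to-contact treats the center of pressure (COP) as a moving point and asks how long it would take, at its instantaneous velocity, to reach the boundary of the base of support. A simplified sketch assuming a circular boundary (the boundary model and names are assumptions, not the authors' exact computation):

    import numpy as np

    def time_to_contact(cop, fs, boundary_radius):
        """cop: (n_samples, 2) positions relative to base-of-support center (m).
        Returns per-sample time (s) to the boundary at constant velocity."""
        vel = np.gradient(cop, 1.0 / fs, axis=0)
        ttc = np.full(len(cop), np.inf)
        for i, (p, v) in enumerate(zip(cop, vel)):
            a = v @ v
            if a == 0.0:
                continue                       # stationary: never reaches boundary
            b, c = 2.0 * (p @ v), p @ p - boundary_radius ** 2
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                t = (-b + np.sqrt(disc)) / (2.0 * a)   # forward-in-time crossing
                if t > 0.0:
                    ttc[i] = t
        return ttc

Shorter average time-to-contact indicates postural states closer to instability, the precursor examined above.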
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2016-01-01
The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004
Residual attention guidance in blindsight monkeys watching complex natural scenes.
Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi
2012-08-07
Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
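The saliency analysis follows the center-surround logic of computational saliency models: feature maps are compared at a fine (center) and coarse (surround) scale and their absolute differences are combined. A minimal intensity-only sketch (the full model also includes color, orientation, and motion channels):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intensity_saliency(image, center_sigmas=(1.0, 2.0), surround_scale=4.0):
        """Crude center-surround saliency for a 2D grayscale array."""
        img = image.astype(float)
        sal = np.zeros_like(img)
        for s in center_sigmas:
            center = gaussian_filter(img, s)
            surround = gaussian_filter(img, s * surround_scale)
            sal += np.abs(center - surround)    # DoG-style local contrast
        return sal / max(sal.max(), 1e-12)      # normalize to [0, 1]

Gaze samples can then be scored by the saliency at each fixated location and compared with values at random control locations, as in the analysis described above.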
Long-term adaptation to change in implicit contextual learning.
Zellin, Martina; von Mühlenen, Adrian; Müller, Hermann J; Conci, Markus
2014-08-01
The visual world consists of spatial regularities that are acquired through experience in order to guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with 80 repetitions overall. A final test 1 week later revealed equivalent effects of contextual cuing for both target locations, and these were comparable to the effects observed on day 1. That is, observers learned both initial target locations and relocated targets, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.
Endogenous Sequential Cortical Activity Evoked by Visual Stimuli
Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael
2015-01-01
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
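One way to make the "groups of cells firing in synchrony" idea concrete is to count coactive cells per imaging frame and compare that count against shuffled surrogates. The sketch below is a generic approach of this kind, not the authors' multidimensional analysis; the binary raster format, shuffle count, and significance threshold are assumptions.

    import numpy as np

    def synchrony_frames(raster, n_shuffles=1000, alpha=0.05, seed=0):
        # raster: (n_cells, n_frames) binary array of inferred calcium events.
        # Chance is estimated by circularly shifting each cell's event train,
        # which preserves per-cell rates but destroys coactivation.
        rng = np.random.default_rng(seed)
        coactive = raster.sum(axis=0)  # number of cells active in each frame
        n_cells, n_frames = raster.shape
        null_max = np.empty(n_shuffles)
        for i in range(n_shuffles):
            shifts = rng.integers(0, n_frames, size=n_cells)
            shuffled = np.stack([np.roll(row, s) for row, s in zip(raster, shifts)])
            null_max[i] = shuffled.sum(axis=0).max()
        threshold = np.quantile(null_max, 1 - alpha)
        return np.where(coactive > threshold)[0]

    # Illustrative raster: 50 cells, 2000 frames, sparse random events.
    rng = np.random.default_rng(1)
    raster = (rng.random((50, 2000)) < 0.02).astype(int)
    print(synchrony_frames(raster))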
Altered salience processing in attention deficit hyperactivity disorder.
Tegelbeckers, Jana; Bunzeck, Nico; Duzel, Emrah; Bonath, Björn; Flechtner, Hans-Henning; Krauel, Kerstin
2015-06-01
Attentional problems in patients with attention deficit hyperactivity disorder (ADHD) have often been linked with deficits in cognitive control. Whether these deficits are associated with increased sensitivity to external salient stimuli remains unclear. To address this issue, we acquired functional brain images (fMRI) in 38 boys with and without ADHD (age: 11-16 years). To differentiate the effects of item novelty, contextual rareness and task relevance, participants performed a visual oddball task including four stimulus categories: a frequent standard picture (62.5%), unique novel pictures (12.5%), one repeated rare picture (12.5%), and a target picture (12.5%) that required a specific motor response. As a main finding, we show considerable overlap in novelty-related BOLD responses between both groups, but only healthy participants showed neural deactivation in temporal as well as frontal regions in response to novel pictures. Furthermore, only ADHD patients, but not healthy controls, engaged wide parts of the novelty network when processing the rare but familiar picture. Our results provide the first evidence that ADHD patients show enhanced neural activity in response to novel but behaviorally irrelevant stimuli as well as reduced habituation to familiar items. These findings suggest an inefficient use of neuronal resources in children with ADHD that could be closely linked to increased distractibility. © 2015 Wiley Periodicals, Inc.
Memory for time and place contributes to enhanced confidence in memories for emotional events
Rimmele, Ulrike; Davachi, Lila; Phelps, Elizabeth A.
2012-01-01
Emotion strengthens the subjective sense of remembering. However, these confidently remembered emotional memories have not been found to be more accurate for some types of contextual details. We investigated whether the subjective sense of recollecting negative stimuli is coupled with enhanced memory accuracy for three specific types of central contextual details using the remember/know paradigm and confidence ratings. Our results indicate that the subjective sense of remembering is indeed coupled with better recollection of spatial location and temporal context. In contrast, we found a double-dissociation between the subjective sense of remembering and memory accuracy for colored dots placed in the conceptual center of negative and neutral scenes. These findings show that the enhanced subjective recollective experience for negative stimuli reliably indicates objective recollection for spatial location and temporal context, but not for other types of details, whereas for neutral stimuli, the subjective sense of remembering is coupled with all the types of details assessed. Translating this finding to flashbulb memories, we found that, over time, more participants correctly remembered the location where they learned about the terrorist attacks on 9/11 than any other canonical feature. Likewise, participants’ confidence was higher in their memory for location vs. other canonical features. These findings indicate that the strong recollective experience of a negative event corresponds to an accurate memory for some kinds of contextual details, but not other kinds. This discrepancy provides further evidence that the subjective sense of remembering negative events is driven by a different mechanism than the subjective sense of remembering neutral events. PMID:22642353
Neural representations of contextual guidance in visual search of real-world scenes.
Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P
2013-05-01
Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.
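MVPA of the kind described here typically trains a linear classifier on voxel patterns and evaluates it with cross-validation. The following sketch uses scikit-learn on synthetic data standing in for LOC voxel responses and binary expected-target-location labels; it is a generic illustration, not the authors' pipeline, and the trial counts, voxel counts, and CV scheme are placeholders.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 500))   # 120 trials x 500 ROI voxels (placeholders)
    y = rng.integers(0, 2, size=120)  # expected target location: 0 = left, 1 = right

    # Cross-validated decoding; chance is 0.5 for these random data.
    acc = cross_val_score(LinearSVC(C=1.0), X, y, cv=10)
    print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")

In practice one would use leave-runs-out cross-validation on beta estimates per trial rather than the simple 10-fold split shown here.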
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Compatibility of Motion Facilitates Visuomotor Synchronization
ERIC Educational Resources Information Center
Hove, Michael J.; Spivey, Michael J.; Krumhansl, Carol L.
2010-01-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1,…
Contrast effects on stop consonant identification.
Diehl, R L; Elman, J L; McCusker, S B
1978-11-01
Changes in the identification of speech sounds following selective adaptation are usually attributed to a reduction in sensitivity of auditory feature detectors. An alternative explanation of these effects is based on the notion of response contrast. In several experiments, subjects identified the initial segment of synthetic consonant-vowel syllables as either the voiced stop [b] or the voiceless stop [ph]. Each test syllable had a value of voice onset time (VOT) that placed it near the English voiced-voiceless boundary. When the test syllables were preceded by a single clear [b] (VOT = -100 msec), subjects tended to identify them as [ph], whereas when they were preceded by an unambiguous [ph] (VOT = 100 msec), the syllables were predominantly labeled [b]. This contrast effect occurred even when the contextual stimuli were velar and the test stimuli were bilabial, which suggests a featural rather than a phonemic basis for the effect. To discount the possibility that these might be instances of single-trial sensory adaptation, we conducted a similar experiment in which the contextual stimuli followed the test items. Reliable contrast effects were still obtained. In view of these results, it appears likely that response contrast accounts for at least some component of the adaptation effects reported in the literature.
Flexible attention deployment in threatening contexts: an instructed fear conditioning study.
Shechner, Tomer; Pelc, Tatiana; Pine, Daniel S; Fox, Nathan A; Bar-Haim, Yair
2012-10-01
Factors leading humans to shift attention away from danger cues remain poorly understood. Two laboratory experiments reported here show that context interacts with learning experiences to shape attention avoidance of mild danger cues. The first experiment exposed 18 participants to contextual threat of electric shock. Attention allocation to mild danger cues was then assessed with the dot-probe task. Results showed that contextual threat caused subjects to avert attention from danger cues. In the second experiment, 36 participants were conditioned to the same contextual threat used in Experiment 1. These subjects then were randomly assigned to either an experimental group, trained to shift attention toward danger cues, or a placebo group exposed to the same stimuli without the training component. As in Experiment 1, contextual threat again caused attention allocation away from danger in the control group. However, this did not occur in the experimental group. These experiments show that acute contextual threat and learning experiences interact to shape the deployment of attention away from danger cues.
Neural Correlates of Contextual Cueing Are Modulated by Explicit Learning
ERIC Educational Resources Information Center
Westerberg, Carmen E.; Miller, Brennan B.; Reber, Paul J.; Cohen, Neal J.; Paller, Ken A.
2011-01-01
Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be…
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
Entourage: Visualizing Relationships between Biological Pathways using Contextual Subsets
Lex, Alexander; Partl, Christian; Kalkofen, Denis; Streit, Marc; Gratzl, Samuel; Wassermann, Anne Mai; Schmalstieg, Dieter; Pfister, Hanspeter
2014-01-01
Biological pathway maps are highly relevant tools for many tasks in molecular biology. They reduce the complexity of the overall biological network by partitioning it into smaller manageable parts. While this reduction of complexity is their biggest strength, it is, at the same time, their biggest weakness. By removing what is deemed not important for the primary function of the pathway, biologists lose the ability to follow and understand cross-talks between pathways. Considering these cross-talks is, however, critical in many analysis scenarios, such as judging effects of drugs. In this paper we introduce Entourage, a novel visualization technique that provides contextual information lost due to the artificial partitioning of the biological network, but at the same time limits the presented information to what is relevant to the analyst’s task. We use one pathway map as the focus of an analysis and allow a larger set of contextual pathways. For these context pathways we only show the contextual subsets, i.e., the parts of the graph that are relevant to a selection. Entourage suggests related pathways based on similarities and highlights parts of a pathway that are interesting in terms of mapped experimental data. We visualize interdependencies between pathways using stubs of visual links, which we found effective yet not obtrusive. By combining this approach with visualization of experimental data, we can provide domain experts with a highly valuable tool. We demonstrate the utility of Entourage with case studies conducted with a biochemist who researches the effects of drugs on pathways. We show that the technique is well suited to investigate interdependencies between pathways and to analyze, understand, and predict the effect that drugs have on different cell types.
Fig. 1: Entourage showing the Glioma pathway in detail and contextual information of multiple related pathways.
PMID:24051820
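The core idea of a contextual subset, showing only the part of a context pathway that is relevant to the current selection, can be approximated as a neighborhood extraction on the pathway graph. The networkx sketch below is a simplified reading of that idea, not the Entourage implementation; the node names and hop radius are illustrative.

    import networkx as nx

    def contextual_subset(pathway, selected_nodes, radius=2):
        # Keep only nodes within `radius` hops of any selected node.
        keep = set()
        for n in selected_nodes:
            if n in pathway:
                keep |= set(nx.ego_graph(pathway, n, radius=radius).nodes)
        return pathway.subgraph(keep).copy()

    # Illustrative pathway graph; node names are placeholders, not real genes.
    g = nx.path_graph(["A", "B", "C", "D", "E", "F"])
    sub = contextual_subset(g, ["C"], radius=1)  # keeps B, C, D
    print(sorted(sub.nodes))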
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
Visually-induced near-infrared spectroscopy (NIRS) response was utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenated sequence of alternating flickering segments and resting segments. The flickering segment was designed with a fixed duration of 3 s whereas the resting segment was chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study and were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
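Because the flicker sequences are mutually independent, averaging the NIRS signal time-locked to one target's onsets should preserve that target's evoked response while averaging out responses driven by the other targets. A minimal sketch of such a detector is given below, assuming a single-channel signal and per-target onset lists; the 3 s window follows the flicker segment duration, but the sampling rate and the peak-to-peak amplitude score are assumptions.

    import numpy as np

    def detect_gazed_target(signal, onsets_per_target, fs, window_s=3.0):
        # For each candidate target, average epochs time-locked to its onsets;
        # the gazed target should show the largest mean evoked response.
        win = int(window_s * fs)
        scores = []
        for onsets in onsets_per_target:
            epochs = np.stack([signal[o:o + win]
                               for o in onsets if o + win <= len(signal)])
            evoked = epochs.mean(axis=0)                # non-locked activity averages out
            scores.append(evoked.max() - evoked.min())  # peak-to-peak amplitude score
        return int(np.argmax(scores))

    fs = 10.0  # assumed NIRS sampling rate in Hz
    rng = np.random.default_rng(0)
    signal = rng.normal(size=6000)
    onsets = [np.sort(rng.choice(5000, size=20, replace=False)) for _ in range(4)]
    print(detect_gazed_target(signal, onsets, fs))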
2011-01-01
Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli that were presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable increase in the goats' average movement velocity within their enclosures, nor in the duration of movement, during presentation of the stimuli. Also there was no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological and behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity of goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Manginelli, Angela A; Baumgartner, Florian; Pollmann, Stefan
2013-02-15
Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory. Copyright © 2012 Elsevier Inc. All rights reserved.
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are as follows: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
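The paper measures the periodicity of pulse trains before correlating it with performance. One simple, generic way to quantify periodicity (not necessarily the authors' measure) is via the coefficient of variation of inter-onset intervals, as sketched below; the index definition is an assumption for illustration.

    import numpy as np

    def periodicity_index(onset_times):
        # 1 / (1 + CV) of inter-onset intervals: 1.0 for a perfectly periodic
        # train, approaching 0 as the train becomes more irregular (aperiodic).
        ioi = np.diff(np.sort(onset_times))
        cv = ioi.std() / ioi.mean()
        return 1.0 / (1.0 + cv)

    periodic = np.arange(0, 1.0, 0.1)                  # regular pulse train
    rng = np.random.default_rng(0)
    aperiodic = np.sort(rng.uniform(0, 1.0, size=10))  # scattered pulses
    print(periodicity_index(periodic), periodicity_index(aperiodic))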
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of such stimuli. (TJH)
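For intuition, an ideal observer limited by Poisson photon noise discriminates two stimuli with mean absorption counts a and b with a discriminability of roughly d' = (a - b) / sqrt((a + b) / 2). The sketch below implements this standard Gaussian approximation; it is a textbook illustration, not Geisler's full sequential analysis.

    import numpy as np

    def dprime_poisson(a, b):
        # Ideal-observer d' for two stimuli yielding Poisson photon counts
        # with means a and b (Gaussian approximation to the Poisson).
        return (a - b) / np.sqrt((a + b) / 2.0)

    # Example: stimuli producing on average 120 vs 100 absorbed photons.
    print(dprime_poisson(120.0, 100.0))  # ~1.91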
ERIC Educational Resources Information Center
Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn
2011-01-01
Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to identify the features that determine the strength of the preference for visual stimuli, in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and the monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at the stimulus for an additional 6 s, and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Differences in apparent straightness of dot and line stimuli.
NASA Technical Reports Server (NTRS)
Parlee, M. B.
1972-01-01
An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Sevinc, Gunes; Spreng, R. Nathan
2014-01-01
Background and Objectives Human morality has been investigated using a variety of tasks ranging from judgments of hypothetical dilemmas to viewing morally salient stimuli. These experiments have provided insight into neural correlates of moral judgments and emotions, yet these approaches reveal important differences in moral cognition. Moral reasoning tasks require active deliberation while moral emotion tasks involve the perception of stimuli with moral implications. We examined convergent and divergent brain activity associated with these experimental paradigms taking a quantitative meta-analytic approach. Data Source A systematic search of the literature yielded 40 studies. Studies involving explicit decisions in a moral situation were categorized as active (n = 22); studies evoking moral emotions were categorized as passive (n = 18). We conducted a coordinate-based meta-analysis using the Activation Likelihood Estimation to determine reliable patterns of brain activity. Results & Conclusions Results revealed a convergent pattern of reliable brain activity for both task categories in regions of the default network, consistent with the social and contextual information processes supported by this brain network. Active tasks revealed more reliable activity in the temporoparietal junction, angular gyrus and temporal pole. Active tasks demand deliberative reasoning and may disproportionately involve the retrieval of social knowledge from memory, mental state attribution, and construction of the context through associative processes. In contrast, passive tasks reliably engaged regions associated with visual and emotional information processing, including lingual gyrus and the amygdala. A laterality effect was observed in dorsomedial prefrontal cortex, with active tasks engaging the left, and passive tasks engaging the right. While overlapping activity patterns suggest a shared neural network for both tasks, differential activity suggests that processing of moral input is affected by task demands. The results provide novel insight into distinct features of moral cognition, including the generation of moral context through associative processes and the perceptual detection of moral salience. PMID:24503959
Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice
2018-02-01
Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To mitigate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants into their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
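The ROC analysis reported above reduces to computing an AUC from group labels and a continuous sexual interest index. The scikit-learn sketch below shows the computation on synthetic scores; the group sizes follow the study (15 per group), but the score distributions are invented for illustration.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Illustrative data: 1 = offender group, 0 = comparison group;
    # index = differential response to child vs adult stimuli (arbitrary units).
    labels = np.array([1] * 15 + [0] * 15)
    rng = np.random.default_rng(0)
    index = np.concatenate([rng.normal(1.0, 1.0, 15), rng.normal(0.0, 1.0, 15)])

    auc = roc_auc_score(labels, index)
    print(f"AUC = {auc:.2f}")  # values near .8 match the range reported above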
A Hierarchical and Contextual Model for Learning and Recognizing Highly Variant Visual Categories
2010-01-01
neighboring pattern primitives, to create our model. We also present a minimax entropy framework for automatically learning which contextual constraints are...
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems, based on cognitive style. The mathematical visualization process was examined in terms of image generation, image inspection, image scanning, and image transformation. The subjects were eighth-grade students, categorized by the GEFT (Group Embedded Figures Test), adopted from Witkin, as either field independent or field dependent, and selected for being communicative. Data were collected through a visualization test on a contextual problem and through interviews; validity was established through time triangulation. Data analysis addressed the aspects of mathematical visualization through the steps of categorization, reduction, discussion, and conclusion. The results showed that the field-independent and field-dependent subjects differed in responding to contextual problems. The field-independent subject produced representations in both 2D and 3D, while the field-dependent subject produced representations in 3D only. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, while the field-dependent subject viewed it from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed a transformation, rotating the object, to reach the solution. This research serves as a reference for junior high school mathematics curriculum developers in Indonesia. In addition, teachers could develop students' mathematical visualization by using technological media or software, such as GeoGebra or Portable Cabri, in their lessons.
Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza
2016-05-01
Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st order and 2nd order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both 1st and 2nd order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st and 2nd order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st and 2nd order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st order stimuli and 2nd order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
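The quick CSF framework models log sensitivity as a truncated log-parabola with four parameters (peak gain, peak frequency, bandwidth, and a low-frequency truncation), cf. Lesmes et al. (2010). The sketch below implements that functional form as it is commonly written; the exact parameterization and the Bayesian adaptive estimation procedure of the qCSF are not reproduced here, and the example parameter values are illustrative.

    import numpy as np

    def qcsf(f, gamma, f_max, beta, delta):
        # gamma: peak sensitivity; f_max: peak spatial frequency (cpd);
        # beta: full bandwidth at half maximum, in octaves;
        # delta: low-frequency truncation (in log10 units of sensitivity).
        log_s = np.log10(gamma) - np.log10(2.0) * (2.0 * np.log2(f / f_max) / beta) ** 2
        trunc = np.log10(gamma) - delta
        low = (f < f_max) & (log_s < trunc)  # truncate only the low-frequency limb
        log_s = np.where(low, trunc, log_s)
        return 10.0 ** log_s

    freqs = np.logspace(-0.5, 1.5, 50)  # ~0.3 to ~30 cycles per degree
    sens = qcsf(freqs, gamma=150.0, f_max=3.0, beta=3.0, delta=0.5)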
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
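Amplitude and synchrony of band-limited oscillations, as analyzed here for 20-35 Hz activity, are often quantified with a band-pass filter plus the Hilbert transform: the envelope gives amplitude and the phase difference between sensors gives a phase-locking value. The scipy sketch below illustrates this generic approach; it is not the authors' analysis pipeline, and the sampling rate and demo signals are invented.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_amplitude_and_plv(x, y, fs, band=(20.0, 35.0), order=4):
        # Band-pass both signals, then take analytic amplitude and the
        # phase-locking value (PLV) between them in the 20-35 Hz range.
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        ax_, ay_ = (hilbert(filtfilt(b, a, s)) for s in (x, y))
        plv = np.abs(np.mean(np.exp(1j * (np.angle(ax_) - np.angle(ay_)))))
        return np.abs(ax_).mean(), np.abs(ay_).mean(), plv

    fs = 500.0
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * 27 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    y = np.sin(2 * np.pi * 27 * t + 0.3) + 0.5 * np.random.default_rng(1).normal(size=t.size)
    print(band_amplitude_and_plv(x, y, fs))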
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Due to the ecological potential of techniques such as virtual reality (VR), inspection of whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images is important. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral valences) and visualization types (2D, 3D). However, main effects were also analyzed. The effect of emotional valence and visualization types and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
Anxiety and the interpretation of ambiguous information: beyond the emotion-congruent effect.
Blanchette, Isabelle; Richards, Anne
2003-06-01
The authors investigated how anxiety influences the use of contextual information in the resolution of ambiguity. Participants heard ambiguous homophones (threat/neutral, positive/neutral, and neutral/neutral) with related contextual information. State anxiety was manipulated experimentally. The interpretations of anxious participants were influenced by context to a greater extent than those of control participants. Some mood-incongruent effects were observed where anxious participants were more likely to adopt neutral interpretations of potentially threatening stimuli. Effects were observed in a spelling task (Experiments 1 and 2) and in a lexical decision task (Experiment 3), with supraliminal and subliminal presentation of contextual cues, and with 2 different anxiety-induction procedures. Results show how anxiety affects both the content and the process of resolution of ambiguity.
Integrative Properties of the Pe1 Neuron, a Unique Mushroom Body Output Neuron
Rybak, Jürgen; Menzel, Randolf
1998-01-01
A mushroom body extrinsic neuron, the Pe1 neuron, connects the peduncle of the mushroom body (MB) with two areas of the protocerebrum in the honeybee brain, the lateral protocerebral lobe (LPL) and the ring neuropil around the α-lobe. Each side of the bee brain contains only one Pe1 neuron. Using a combination of intracellular recording and neuroanatomical techniques we analyzed its properties of integrative processing of the different sensory modalities. The Pe1 neuron responds to visual, mechanosensory, and olfactory stimuli. The responses are broadly tuned, consisting of a sustained increase of spike frequency to the onset and offset of light flashes, to horizontal and vertical movements of extended objects, to mechanical stimuli applied to the antennae or mouth parts, and to all olfactory stimuli tested (29 chemicals). These multisensory properties are reflected in its dendritic organization. Serial reconstructions of intracellularly stained Pe1 neurons using confocal microscopy reveal that the Pe1 neuron arborizes throughout all layers of MB peduncle with finger-like, vertically oriented dendrites. The peduncle of the MB is formed by the axons of Kenyon cells, whose dendritic inputs are organized in modality-specific subcompartments of the calyx region. The peduncular arborization indicates that the Pe1 neuron receives input from Kenyon cells of all calycal subcompartments. Because the Pe1 neuron changes its odor responses transiently as a consequence of olfactory learning, we hypothesize that the multimodal response properties might have a role in memory consolidation and help to establish contextual references in the long-term trace. PMID:10454378
Gottschalk, Caroline; Fischer, Rico
2017-03-01
Different contexts with high versus low conflict frequencies require specific attentional control involvement: strong attentional control for high-conflict contexts and less attentional control for low-conflict contexts. While it is assumed that the corresponding control set can be activated upon stimulus presentation at the respective context (e.g., upper versus lower location), the actual features that trigger control set activation have to date not been described. Here, we ask whether perceptual priming of the location context by the abrupt onset of irrelevant stimuli is sufficient to activate the context-specific attentional control set. For example, the mere onset of a stimulus might disambiguate the relevant location context and thus serve as a low-level perceptual trigger mechanism that activates the context-specific attentional control set. In Experiments 1 and 2, the onsets of task-relevant and task-irrelevant (distracter) stimuli were manipulated at each context location to compete for triggering the activation of the appropriate control set. In Experiment 3, a prior training session enabled distracter stimuli to establish contextual control associations of their own before entering the test session. Results consistently showed that the mere onset of a task-irrelevant stimulus (with or without a context-control association) is not sufficient to activate the context-associated attentional control set by disambiguating the relevant context location. Instead, we argue that identification of the relevant stimulus at the respective context is a precondition for triggering the activation of the context-associated attentional control set.
Sysoeva, Olga V.; Galuta, Ilia A.; Davletshina, Maria S.; Orekhova, Elena V.; Stroganova, Tatiana A.
2017-01-01
Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception, at least two phenomena critically depend on the E/I balance in visual cortex: spatial suppression (SS) and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low-contrast stimuli, owing to the prevalence of inhibitory contextual modulations in the former case and excitatory ones in the latter. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate the previous findings and to explore the putative contribution of deficient inhibitory influences to an enhanced SF index in ASD, a cornerstone of the interpretation proposed by Foss-Feig et al. (2013). SS and SF were examined in 40 boys with ASD with a broad spectrum of intellectual abilities (63 < IQ < 127) and in 44 typically developing (TD) boys, aged 6–15 years. Stimuli of small (1°) and large (12°) radius were presented under high (100%) and low (1%) contrast conditions. The Social Responsiveness Scale and Sensory Profile Questionnaire were used to assess autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced, in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index, but not the SS index, correlated with the severity of autism and with poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of a correlation between the SF and SS indexes, paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample, emphasizes the role of enhanced excitatory influences by themselves in the abnormalities in low-level visual phenomena found in ASD. PMID:28405183
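For readers unfamiliar with these indices, a common formulation in this literature (assumed here; the paper's exact definition may differ) expresses SS and SF as log ratios of duration thresholds T for discriminating the motion direction of small versus large stimuli:

```latex
% Assumed, commonly used definitions; not quoted from the paper.
\[
\mathrm{SSI} \;=\; \log_{10}\frac{T^{\mathrm{high}}_{\mathrm{large}}}{T^{\mathrm{high}}_{\mathrm{small}}},
\qquad
\mathrm{SFI} \;=\; \log_{10}\frac{T^{\mathrm{low}}_{\mathrm{small}}}{T^{\mathrm{low}}_{\mathrm{large}}}.
\]
% SSI > 0: at high contrast, large stimuli require longer durations (suppression).
% SFI > 0: at low contrast, large stimuli require shorter durations (facilitation).
```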
Gawronski, Bertram; Deutsch, Roland; Seidel, Oliver
2005-09-01
Drawing on two alternative accounts of the affective priming effect (spreading activation vs. response interference), the present research investigated the underlying processes of how evaluative context stimuli influence implicit evaluations in the affective priming task. Employing two sequentially presented prime stimuli (rather than a single prime), two experiments showed that affective priming effects elicited by a given prime stimulus were more pronounced when this stimulus was preceded by a context prime of the opposite valence than when it was preceded by a context prime of the same valence. This effect consistently emerged for pictures (Experiment 1) and words (Experiment 2) as prime stimuli. These results suggest that the impact of evaluative context stimuli on implicit evaluations is mediated by contrast effects in the attention to evaluative information rather than by additive effects in the activation of evaluative information in associative memory.
Role of the 5HT3 Receptor in Alcohol Drinking and Aggression Using a Transgenic Mouse Model
2006-09-01
Dissociations in hippocampal 5-hydroxytryptamine release in the rat following Pavlovian aversive conditioning to discrete and contextual stimuli. Eur J... P < 0.05]. B6SJL/F2-OE and C57Bl/6J-OE mice display improved contextual fear conditioning, whereas DBA/2J-OE mice do not. Fear conditioning to... None of the IS groups differed in freezing behavior and are not reported here. Transgene presence improved conditioning on B6SJL/F2 and C57Bl/6J
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Shades of yellow: interactive effects of visual and odour cues in a pest beetle
Stevenson, Philip C.; Belmain, Steven R.
2016-01-01
Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707
Dissociation of neural mechanisms underlying orientation processing in humans
Ling, Sam; Pearson, Joel; Blake, Randolph
2009-01-01
Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress the neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905
Virtual reality stimuli for force platform posturography.
Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko
2002-01-01
People who rely heavily on vision to control posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.
Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.
Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J
2017-01-01
We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
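The learning rule summarized above (inner-product unimodal input, Hebbian potentiation plus a decay term) can be sketched in a few lines of Python. All parameters, array sizes, and the stimulus statistics below are illustrative assumptions, not the model's actual values.

```python
# Sketch of receptive-field training via Hebbian potentiation plus a decay
# term, with unimodal input computed as an inner product of the stimulus and
# the receptive-field synapses. Sizes, rates, and the stimulus statistics are
# illustrative assumptions, not the model's actual parameters.
import numpy as np

n_space = n_neurons = 180                       # azimuth bins / topographic neurons
rng = np.random.default_rng(1)
W = rng.uniform(0.0, 0.1, size=(n_neurons, n_space))   # receptive-field synapses
lr, decay = 0.05, 0.005
x = np.arange(n_space)

for _ in range(5000):
    # Stimuli occur more often near the center, mimicking the central prior.
    center = np.clip(rng.normal(n_space / 2, 25.0), 0, n_space - 1)
    s = np.exp(-0.5 * ((x - center) / 8.0) ** 2)  # Gaussian input pattern
    drive = W @ s                                 # inner-product unimodal input
    act = np.maximum(drive - drive.mean(), 0.0)   # crude competition/threshold
    W += lr * np.outer(act, s) - decay * W        # Hebbian potentiation + decay

# With central stimuli over-represented, receptive fields sharpen and preferred
# positions drift toward the center, approximating the input statistics.
```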
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
ERIC Educational Resources Information Center
Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert
2012-01-01
The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…
ERIC Educational Resources Information Center
Guo, Jing; McLeod, Poppy Lauretta
2014-01-01
Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
Acoustic and visual stimuli generated by: (1) Helicopter landings/takeoffs; (2) noise generated during... minimize acoustic and visual disturbances) as described in NMFS' December 22, 2010 (75 FR 80471) notice of... Activity on Marine Mammals.
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.
ERIC Educational Resources Information Center
Hornstein, Henry A.; Mosley, James L.
1979-01-01
The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha
2015-03-01
While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H2(15)O positron emission tomography scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users activated primarily the visual cortices more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with bilateral visual cortices regardless of congruency. Taken together these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are disturbed more by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal hearing individuals primarily adopt a bottom-up strategy.
Raymond, J L; Lisberger, S G
1996-12-01
We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.
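For reference, the VOR "amplitude" manipulated in these experiments is conventionally quantified as the reflex gain, the ratio of evoked eye speed to head speed (a standard definition in this field, not specific to this paper):

```latex
% Standard definition of VOR gain (the eye movement is compensatory, hence the sign):
\[
g_{\mathrm{VOR}} \;=\; -\,\frac{\dot{E}(t)}{\dot{H}(t)},
\]
% where \dot{E} and \dot{H} are eye and head angular velocities. Consistent with
% the abstract, gain-down training (head and visual motion in the same direction)
% pushes g below 1, and gain-up training (opposite directions) pushes g above 1.
```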
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
Van Hiel, Alain; Pattyn, Sven; Onraet, Emma; Severens, Els
2012-01-01
The present study investigates patterns of event-related brain potentials following the presentation of attitudinal stimuli among political moderates (N = 12) and anarchists (N = 11). We used a modified oddball paradigm to investigate the evaluative inconsistency effect elicited by stimuli embedded in a sequence of contextual stimuli with an opposite valence. Increased late positive potentials (LPPs) of extreme political attitudes were observed. Moreover, this LPP enhancement was larger among anarchists than among moderates, indicating that an extreme political attitude of a moderate differs from an extreme political attitude of an anarchist. The discussion elaborates on the meaning of attitude extremity for moderates and extremists. PMID:21421734
Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study
Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.
2012-01-01
Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study
NASA Astrophysics Data System (ADS)
Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.
2015-08-01
Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.
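As an illustration of a single-trial workload classification pipeline of the general kind described (the paper's actual oscillatory EEG/ERP detection method differs in detail; the bands, epoch length, and classifier below are assumptions), per-trial band-power features can be fed to a cross-validated linear classifier:

```python
# Illustrative single-trial workload classification from EEG band power.
# The paper's actual oscillatory EEG/ERP detection method differs in detail;
# bands, epoch length, and classifier here are assumptions. Data are random
# placeholders, so real accuracy will hover at chance (0.25 for 4 classes).
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                            # sampling rate (Hz), hypothetical
n_trials, n_channels = 200, 32
rng = np.random.default_rng(2)
eeg = rng.standard_normal((n_trials, n_channels, 2 * fs))   # 2-s epochs
labels = rng.integers(0, 4, size=n_trials)          # 4 workload levels

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)            # PSD along the last axis
    return pxx[..., (f >= lo) & (f < hi)].mean(axis=-1)

# Theta, alpha, and beta power per channel as per-trial features.
feats = np.concatenate([band_power(eeg, 4, 8),
                        band_power(eeg, 8, 13),
                        band_power(eeg, 13, 30)], axis=1)

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, np.log(feats), labels, cv=5).mean())
```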
Lateral eye-movement responses to visual stimuli.
Wilbur, M P; Roberts-Wilbur, J
1985-08-01
The association of left lateral eye-movement with emotionality or arousal of affect and of right lateral eye-movement with cognitive/interpretive operations and functions was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology. There were 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with existing lateral eye-movement literature and also are additive by using visual stimuli that do not require the explicit response or implicit processing of verbal questioning.
Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R
2017-07-01
Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Visual stimuli and written production of deaf signers.
Jacinto, Laís Alves; Ribeiro, Karen Barros; Soares, Aparecido José Couto; Cárnio, Maria Silvia
2012-01-01
To verify the interference of visual stimuli in the written production of deaf signers with no complaints regarding reading and writing. The research group consisted of 12 students between the 4th and 5th grades of elementary school, with severe or profound sensorineural hearing loss, users of LIBRAS, and with an alphabetical writing level. The evaluation was performed with pictures in a logical sequence and an action picture. The analysis used communicative competence criteria. There were no differences in the written production of the subjects between the two stimuli. All texts lacked titles and punctuation, verbs were in the infinitive, cohesive links were missing, and created words were included. The different visual stimuli did not affect the production of texts.
Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age
ERIC Educational Resources Information Center
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
2013-01-01
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Do You "See'" What I "See"? Differentiation of Visual Action Words
ERIC Educational Resources Information Center
Dickinson, Joël; Cirelli, Laura; Szeligo, Frank
2014-01-01
Dickinson and Szeligo ("Can J Exp Psychol" 62(4):211--222, 2008) found that processing time for simple visual stimuli was affected by the visual action participants had been instructed to perform on these stimuli (e.g., see, distinguish). It was concluded that these effects reflected the differences in the durations of these various…
Neuronal Response Gain Enhancement prior to Microsaccades.
Chen, Chih-Yang; Ignashchenkova, Alla; Thier, Peter; Hafed, Ziad M
2015-08-17
Neuronal response gain enhancement is a classic signature of the allocation of covert visual attention without eye movements. However, microsaccades continuously occur during gaze fixation. Because these tiny eye movements are preceded by motor preparatory signals well before they are triggered, it may be the case that a corollary of such signals causes enhancement, even without attentional cueing. In six different macaque monkeys and two different brain areas previously implicated in covert visual attention (superior colliculus and frontal eye fields), we show neuronal response gain enhancement for peripheral stimuli appearing immediately before microsaccades. This enhancement occurs both during simple fixation with behaviorally irrelevant peripheral stimuli and when the stimuli are relevant for the subsequent allocation of covert visual attention. Moreover, this enhancement occurs in both purely visual neurons and visual-motor neurons, and it is replaced by suppression for stimuli appearing immediately after microsaccades. Our results suggest that there may be an obligatory link between microsaccade occurrence and peripheral selective processing, even though microsaccades can be orders of magnitude smaller than the eccentricities of peripheral stimuli. Because microsaccades occur in a repetitive manner during fixation, and because these eye movements reset neurophysiological rhythms every time they occur, our results highlight a possible mechanism through which oculomotor events may aid periodic sampling of the visual environment for the benefit of perception, even when gaze is prevented from overtly shifting. One functional consequence of such periodic sampling could be the magnification of rhythmic fluctuations of peripheral covert visual attention.
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed.
Positive mood broadens visual attention to positive stimuli.
Wadlinger, Heather A; Isaacowitz, Derek M
2006-03-01
In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli to symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs.
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features.
Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.
Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor
2015-04-01
Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high-density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal), by applying time-wise, assumption-free nonparametric randomization statistics to the strength and topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third, separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer evidence of an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness, and open an avenue for new investigations of subliminal stimulation without visual masking. © 2014 Wiley Periodicals, Inc.
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place within 100-140 ms of presentation. The present study, using electrocorticography, determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli were processed by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than for peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those representing the central field, are involved in processing visual cues for the rapid detection of upright face stimuli. PMID:25579446
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (a still image processed from the speaker's face using a strong Gaussian filter; control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening did not differ significantly between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres, but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process that does not depend on the congruency of the visual information.
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
Multiperson visual focus of attention from head pose and meeting contextual cues.
Ba, Sileye O; Odobez, Jean-Marc
2011-01-01
This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., the eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
Cyr, André; Boukadoum, Mounir
2013-03-01
This paper presents a novel bio-inspired habituation function for robots under the control of an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimulus patterns in a dynamic virtual 3D world. Habituation is minimally represented as an attenuated response after exposure to and perception of persistent external stimuli. Based on current neuroscience research, the originality of this rule includes a modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data via the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating over multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by allowing it to ignore broader, contextually irrelevant information.
Contextual Compression of Large-Scale Wind Turbine Array Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue that this reduced but contextualized representation is a valid approach and encourages contextual data management.
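A minimal sketch of the saliency-driven retention idea, assuming PyWavelets and a made-up saliency mask (the actual storage model, block layout and saliency criteria are not specified by the abstract):

    import numpy as np
    import pywt  # PyWavelets

    rng = np.random.default_rng(1)
    field = rng.standard_normal((256, 256))      # toy 2D simulation slice

    # Hypothetical saliency mask: True where high fidelity is required,
    # e.g., a turbine-wake band through the middle of the domain.
    salient = np.zeros_like(field, dtype=bool)
    salient[96:160, :] = True

    coeffs = pywt.wavedec2(field, "db2", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # Map the saliency mask into coefficient space by transforming it with
    # the same wavelet, then zero small coefficients outside salient areas.
    m_arr, _ = pywt.coeffs_to_array(pywt.wavedec2(salient.astype(float), "db2", level=3))
    keep = np.abs(m_arr) > 1e-3
    thresh = 0.1 * np.abs(arr).max()
    arr[(~keep) & (np.abs(arr) < thresh)] = 0.0  # lossy outside, intact inside

    recon = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db2")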
ERIC Educational Resources Information Center
Torralba, Antonio; Oliva, Aude; Castelhano, Monica S.; Henderson, John M.
2006-01-01
Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global…
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that the mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, orientation selectivity for UV stimuli increased steadily during development, whereas direction selectivity did not. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component of mouse vision. PMID:26219604
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ posterior Kenyon cells.
Effect of negative emotions evoked by light, noise and taste on trigeminal thermal sensitivity.
Yang, Guangju; Baad-Hansen, Lene; Wang, Kelun; Xie, Qiu-Fei; Svensson, Peter
2014-11-07
Patients with migraine often have impaired somatosensory function and experience headache attacks triggered by exogenous stimuli, such as light, sound or taste. This study aimed to assess the influence of three controlled conditioning stimuli (visual, auditory and gustatory), alone and in combination, on affective state and thermal sensitivity in healthy human participants. All participants attended four experimental sessions, with visual, auditory and gustatory conditioning stimuli and the combination of all stimuli, in a randomized sequence. In each session, somatosensory sensitivity was tested in the perioral region using thermal stimuli with and without the conditioning stimuli. Positive and Negative Affect States (PANAS) were assessed before and after the tests. Subject-based ratings of the conditioning and test stimuli, in addition to skin temperature and heart rate as indicators of arousal responses, were collected in real time during the tests. The three conditioning stimuli all induced significant increases in negative PANAS scores (paired t-test, P ≤ 0.016). Compared with baseline, the increases were near dose-dependent during visual and auditory conditioning stimulation. No significant effects of any single conditioning stimulus were observed on trigeminal thermal sensitivity (P ≥ 0.051) or arousal parameters (P ≥ 0.057). The effects of combined conditioning stimuli on subjective ratings (P ≤ 0.038) and negative affect (P = 0.011) were stronger than those of single stimuli. All three conditioning stimuli provided a simple way to evoke a negative affective state without physical arousal or influence on trigeminal thermal sensitivity. Multisensory conditioning had stronger effects but also failed to modulate thermal sensitivity, suggesting that so-called exogenous trigger stimuli (e.g., bright light, noise, unpleasant taste) in patients with migraine may require a predisposed or sensitized nervous system.
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite having no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years to try to explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect are highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information, namely the color and configuration of stimuli, in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task where achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to identify the symbols with the aperture in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction times occurred in the evening. Moreover, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in configuration of stimuli. This effect increases significantly in the presence of mental fatigue, but it does not strongly influence the accuracy of task accomplishment.
Dynamics of normalization underlying masking in human visual cortex.
Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M
2012-02-22
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
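The divisive gain control at the heart of such models can be written compactly; a sketch with invented parameter values (the paper's fitted dynamics, including the ~30 ms gain-pool integration, are omitted):

    import numpy as np

    def masked_response(c_test, c_mask, n=2.0, sigma=0.05):
        # Test drive divided by a gain pool containing both stimuli: the
        # mask suppresses the test as if its input contrast were reduced.
        return c_test**n / (sigma**n + c_test**n + c_mask**n)

    # Well above sigma, the response depends on the contrast *ratio* only:
    # doubling both contrasts leaves the response nearly unchanged
    # (the contrast-contrast invariance noted in the abstract).
    r1 = masked_response(0.2, 0.2)
    r2 = masked_response(0.4, 0.4)   # approximately equal to r1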
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
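The decoding logic described (a simple linear readout of ensemble activity, tested on the preceding stimulus) can be sketched as follows; the spike-count data here are synthetic placeholders, not the recorded data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    # Hypothetical trials: spike counts of ~100 neurons in a short window
    # after the most recent stimulus; labels are the *previous* stimulus.
    X = rng.poisson(3.0, size=(400, 100)).astype(float)
    prev_stim = rng.integers(0, 3, 400)   # letter identity one step back

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[:300], prev_stim[:300])
    # Above-chance accuracy here would indicate a fading memory trace of
    # the preceding stimulus in the current population response.
    acc = clf.score(X[300:], prev_stim[300:])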
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
The primate amygdala represents the positive and negative value of visual stimuli during learning
Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel
2008-01-01
Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
Black–white asymmetry in visual perception
Lu, Zhong-Lin; Sperling, George
2012-01-01
With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221
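The even-harmonic distortion mentioned follows directly from weighting decrements more than increments; a toy demonstration (the 25% figure is taken from the abstract, everything else is illustrative):

    import numpy as np

    # A 4-cycle sine grating profile, then an asymmetric nonlinearity that
    # weights negative (black) excursions 25% more than positive ones.
    x = np.sin(2 * np.pi * 4 * np.linspace(0, 1, 1024, endpoint=False))
    r = np.where(x < 0, 1.25 * x, x)

    spec = np.abs(np.fft.rfft(r)) / len(r)
    # spec[4] is the fundamental; spec[8], the second harmonic, is nonzero
    # only because of the black-white asymmetry (it vanishes for r = x).
    fundamental, second_harmonic = spec[4], spec[8]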
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded by the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there would be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces
NASA Astrophysics Data System (ADS)
Waytowich, Nicholas R.; Krusienski, Dean J.
2015-06-01
Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
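Classification in c-VEP paradigms is commonly done by correlating a trial against lag-shifted versions of a single learned template, since each target flashes the same code at a different circular delay; a schematic version with synthetic signals (the study's actual pipeline is not given in the abstract):

    import numpy as np

    def classify(trial, template, lags):
        # Pick the target whose circularly shifted template best matches.
        scores = [np.corrcoef(trial, np.roll(template, lag))[0, 1]
                  for lag in lags]
        return int(np.argmax(scores))

    rng = np.random.default_rng(4)
    template = rng.standard_normal(512)        # learned code response
    lags = [0, 128, 256, 384]                  # four targets, four delays
    trial = np.roll(template, 256) + 0.5 * rng.standard_normal(512)
    target = classify(trial, template, lags)   # -> 2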
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
Modeling global scene factors in attention
NASA Astrophysics Data System (ADS)
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignore structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America
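The priming scheme described (global image statistics modulating local saliency before any image exploration) can be sketched in miniature; the Gaussian row prior below is a hand-picked stand-in for a prior that would actually be learned from global scene features:

    import numpy as np

    rng = np.random.default_rng(2)
    h, w = 64, 96
    saliency = rng.random((h, w))          # bottom-up conspicuity map

    # Contextual prior: scene statistics suggest the target's likely height
    # in the image (here a hand-picked Gaussian over rows, an assumption).
    rows = np.arange(h)
    prior = np.exp(-0.5 * ((rows - 40) / 6.0) ** 2)

    attention = saliency * prior[:, None]  # context-modulated map
    y, x = np.unravel_index(attention.argmax(), attention.shape)  # fixation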
Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA
2015-01-01
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184
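The contrast drawn between overall strength and heterogeneity can be made concrete with two per-trial statistics; the dispersion measure below is illustrative, not necessarily the metric used in the study:

    import numpy as np

    def trial_stats(resp):
        # resp: responses of all imaged neurons on one trial.
        return resp.mean(), resp.std()   # (strength, heterogeneity proxy)

    rng = np.random.default_rng(5)
    hit = trial_stats(rng.gamma(2.0, 1.0, 200))   # differentiated activity
    miss = trial_stats(np.full(200, 2.0))         # uniform activity with the
                                                  # same mean, no dispersion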
The iconography of mourning and its neural correlates: a functional neuroimaging study
Labek, Karin; Berger, Samantha; Buchheim, Anna; Bosch, Julia; Spohrs, Jennifer; Dommes, Lisa; Beschoner, Petra; Stingl, Julia C.
2017-01-01
The present functional neuroimaging study focuses on the iconography of mourning. A culture-specific pattern of body postures of mourning individuals, mostly suggesting withdrawal, emerged from a survey of visual material. When used in different combinations in stylized drawings in our neuroimaging study, this material activated cortical areas commonly seen in studies of social cognition (temporo-parietal junction, superior temporal gyrus, and inferior temporal lobe), empathy for pain (somatosensory cortex), and loss (precuneus, middle/posterior cingulate gyrus). This pattern of activation developed over time. While in the early phases of exposure lower association areas, such as the extrastriate body area, were active, in the late phases activation in parietal and temporal association areas and the prefrontal cortex was more prominent. These findings are consistent with the conventional and contextual character of iconographic material, and further differentiate it from emotionally negatively valenced and high-arousing stimuli. In future studies, this neuroimaging assay may be useful in characterizing interpretive appraisal of material of negative emotional valence. PMID:28449116
Whole-brain activity mapping onto a zebrafish brain atlas
Randlett, Owen; Wee, Caroline L.; Naumann, Eva A.; Nnaemeka, Onyeka; Schoppik, David; Fitzgerald, James E.; Portugues, Ruben; Lacoste, Alix M.B.; Riegler, Clemens; Engert, Florian; Schier, Alexander F.
2015-01-01
In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open-source atlas containing molecular labels and anatomical region definitions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal-regulated kinase (ERK/MAPK) as a readout of neural activity, we have developed a system to create and contextualize whole-brain maps of stimulus- and behavior-dependent neural activity. This MAP-Mapping (Mitogen-Activated Protein kinase Mapping) assay is technically simple, fast, inexpensive, and its data analysis is completely automated. Since MAP-Mapping is performed on fish that are freely swimming, it is applicable to nearly any stimulus or behavior. We demonstrate the utility of our high-throughput approach using hunting/feeding, pharmacological, visual and noxious stimuli. The resultant maps outline hundreds of areas associated with behaviors. PMID:26778924
Why do animals differ in their susceptibility to geometrical illusions?
Feng, Lynna C; Chouinard, Philippe A; Howell, Tiffani J; Bennett, Pauleen C
2017-04-01
In humans, geometrical illusions are thought to reflect mechanisms that are usually helpful for seeing the world in a predictable manner. These mechanisms deceive us given the right set of circumstances, correcting visual input where a correction is not necessary. Investigations of non-human animals' susceptibility to geometrical illusions have yielded contradictory results, suggesting that the underlying mechanisms with which animals see the world may differ across species. In this review, we first collate studies showing that different species are susceptible to specific illusions in the same or reverse direction as humans. Based on a careful assessment of these findings, we then propose several ecological and anatomical factors that may affect how a species perceives illusory stimuli. We also consider the usefulness of this information for determining whether sight in different species might be more similar to human sight, being influenced by contextual information, or to how machines process and transmit information as programmed. Future testing in animals could provide new theoretical insights by focusing on establishing dissociations between stimuli that may or may not alter perception in a particular species. This information could improve our understanding of the mechanisms behind illusions, but also provide insight into how sight is subjectively experienced by different animals, and the degree to which vision is innate versus acquired, which is difficult to examine in humans.
Neural responses to salient visual stimuli.
Morris, J S; Friston, K J; Dolan, R J
1997-01-01
The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546
Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B
2018-07-01
Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.
Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam
2011-08-03
Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
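The proposed interplay can be caricatured as a rectified difference between an excitatory drive and a delayed, scaled copy of it; the delay and gain values below are arbitrary, not the paper's fitted parameters:

    import numpy as np

    def thalamic_rate(drive, delay=5, gain=0.9):
        # Delayed suppression clips responses to the brief window where
        # excitation exceeds suppression, yielding precisely timed events.
        suppression = gain * np.roll(drive, delay)
        suppression[:delay] = 0.0
        return np.maximum(drive - suppression, 0.0)

    t = np.linspace(0.0, 1.0, 1000)
    drive = np.maximum(np.sin(2 * np.pi * 2 * t), 0.0)  # slow input
    rate = thalamic_rate(drive)  # narrow response windows at each onset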
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual search toward a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual search was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms following stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to the specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
Hübers, Annemarie; Kassubek, Jan; Grön, Georg; Gorges, Martin; Aho-Oezhan, Helena; Keller, Jürgen; Horn, Hannah; Neugebauer, Hermann; Uttner, Ingo; Lulé, Dorothée; Ludolph, Albert C
2016-09-01
The syndrome of pathological laughing and crying (PLC) is characterized by episodes of involuntary outbursts of emotional expression. Although this phenomenon has been referred to for over a century, a clear-cut clinical definition is still lacking, and the underlying pathophysiological mechanisms are not well understood. In particular, it remains ill-defined which kinds of stimuli (contextually appropriate or inappropriate) elicit episodes of PLC, and whether the phenomenon results from a lack of inhibition from the frontal cortex ("top-down theory") or from altered processing of sensory inputs at the brainstem level ("bottom-up theory"). To address these questions, we studied ten amyotrophic lateral sclerosis (ALS) patients with PLC and ten controls matched for age, sex and education. Subjects were simultaneously exposed to either emotionally congruent or incongruent visual and auditory stimuli and were asked to rate pictures according to their emotional quality. Changes in physiological parameters (heart rate, galvanic skin response, activity of facial muscles) were recorded, and a standardized self-assessment lability score (CNS-LS) was determined. Patients' rating behaviour was shifted in a negative direction by mood-incongruent music; compared to controls, they were influenced by negative stimuli, i.e. they rated neutral pictures more negatively when listening to sad music. Patients rated significantly higher on the CNS-LS. In patients, changes in the electromyographic activity of mimic muscles during different emotion-eliciting conditions were explained by frontal cortex dysfunction. We conclude that PLC is associated with altered emotional suggestibility and that it is preferentially elicited by mood-incongruent stimuli. In addition, physiological reactions as well as behavioural changes suggest that this phenomenon is primarily an expression of reduced inhibitory activity of the frontal cortex, since frontal dysfunction could explain the changes in physiological parameters in the patient group. We consider these findings important for the clinical interpretation of emotional reactions of ALS patients.
NASA Astrophysics Data System (ADS)
Pardo, P. J.; Pérez, A. L.; Suero, M. I.
2004-01-01
An old fluorescence spectrophotometer was recycled to make a three-channel colorimeter. The various modifications involved in its design and implementation are described. An optical system was added that allows the fusion of two visual stimuli coming from the two monochromators of the spectrofluorimeter. Each of these stimuli has wavelength and bandwidth control, and a third visual stimulus may be taken from a monochromator, a cathode ray tube, a thin-film transistor screen, or any other light source. This freedom in the choice of source for the third chromatic channel, together with the characteristics of the visual stimuli from the spectrofluorimeter, gives the design great versatility in its application to novel visual experiments on color vision.
Verhoef, Bram-Ernst; Bohon, Kaitlin S.
2015-01-01
Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470
Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko
2008-11-25
We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency-band-filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but the N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (largest for the Normal, smaller for the Band and smallest for the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Use of Context in Video Processing
NASA Astrophysics Data System (ADS)
Wu, Chen; Aghajan, Hamid
Interpreting an event or a scene based on visual data often requires additional contextual information. Contextual information may be obtained from different sources. In this chapter, we discuss two broad categories of contextual sources: environmental context and user-centric context. Environmental context refers to information derived from domain knowledge or from concurrently sensed effects in the area of operation. User-centric context refers to information obtained and accumulated from the user. Both types of context can include static or dynamic contextual elements. Examples from a smart home environment are presented to illustrate how different types of contextual data can be applied to aid the decision-making process.
Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F
2015-12-01
There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
Primary visual response (M100) delays in adolescents with FASD as measured with MEG.
Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M
2013-11-01
Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli, and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latencies of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contribution of a cognitive factor, namely the audio-visual congruency of speech, a factor that has often been underestimated in previous ventriloquism research. We investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. When bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
Ground Penetrating Radar as a Contextual Sensor for Multi-Sensor Radiological Characterisation
Ukaegbu, Ikechukwu K.; Gamage, Kelum A. A.
2017-01-01
Radioactive sources exist in environments or contexts that influence how they are detected and localised. For instance, the context of a moving source is different from that of a stationary source because of the effects of motion. The need to incorporate this contextual information in the radiation detection and localisation process has necessitated the integration of radiological and contextual sensors. The benefits of the successful integration of both types of sensors are well known and widely reported in fields such as medical imaging. However, the integration of both types of sensors has also led to innovative solutions to challenges in characterising radioactive sources in non-medical applications. This paper presents a review of such recent applications. It also notes that these applications mostly use visual sensors as contextual sensors for characterising radiation sources. However, visual sensors cannot retrieve contextual information about radioactive wastes located in opaque environments encountered at nuclear sites, e.g., underground contamination. Consequently, this paper also examines ground-penetrating radar (GPR) as a contextual sensor for characterising this category of wastes and proposes several ways of integrating data from GPR and radiological sensors. Finally, it demonstrates combined GPR and radiation imaging for three-dimensional localisation of contamination in underground pipes using radiation transport and GPR simulations. PMID:28387706
Dhont, Kristof; Van Hiel, Alain; Pattyn, Sven; Onraet, Emma; Severens, Els
2012-03-01
The present study investigates patterns of event-related brain potentials following the presentation of attitudinal stimuli among political moderates (N=12) and anarchists (N=11). We used a modified oddball paradigm to investigate the evaluative inconsistency effect elicited by stimuli embedded in a sequence of contextual stimuli with an opposite valence. Increased late positive potentials (LPPs) to extreme political attitude stimuli were observed. Moreover, this LPP enhancement was larger among anarchists than among moderates, indicating that an extreme political attitude of a moderate differs from an extreme political attitude of an anarchist. The discussion elaborates on the meaning of attitude extremity for moderates and extremists. © The Author (2011). Published by Oxford University Press.
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from the speaker's face using a strong Gaussian filter: control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are overlaps of neurons among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Body Context and Posture Affect Mental Imagery of Hands
Ionta, Silvio; Perruchoud, David; Draganski, Bogdan; Blanke, Olaf
2012-01-01
Different visual stimuli have been shown to recruit different mental imagery strategies. However the role of specific visual stimuli properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body context, in the present study we investigated whether the mental rotation of stimuli showing either hands as attached to a body (hands-on-body) or not (hands-only), would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum- or the palm-view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for preferential processing of visual- rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only, respectively. PMID:22479618
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrast and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike-history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
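As a rough illustration of the model class described in this abstract, the sketch below implements a minimal Poisson GLM in which the conditional intensity combines a causal stimulus filter with a suppressive spike-history filter. This is not the authors' implementation: the filter shapes, time constants, baseline, and variable names are all invented for illustration.

```python
# Minimal sketch of a Poisson GLM with stimulus and spike-history filters.
# All parameter values are illustrative assumptions, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                       # 1 ms time bins
T = 5000                         # 5 s of simulated data
stim = rng.standard_normal(T)    # stand-in for a luminance time course

k = np.exp(-np.arange(40) / 10.0)        # stimulus filter (40 ms, assumed)
h = -2.0 * np.exp(-np.arange(20) / 5.0)  # suppressive spike-history filter

spikes = np.zeros(T)
drive_s = np.convolve(stim, k)[:T]       # causal stimulus-driven input
for t in range(T):
    # spike-history input from the most recent len(h) bins
    past = spikes[max(0, t - len(h)):t][::-1]
    drive_h = np.dot(h[:len(past)], past)
    rate = np.exp(-3.0 + drive_s[t] + drive_h)  # conditional intensity per bin
    spikes[t] = rng.poisson(rate)
```

The spike-history term is what lets a model of this kind sharpen spike timing beyond what the stimulus alone predicts, which is the mechanism the abstract invokes for the fine precision of LGN responses.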
A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders
Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence
2015-01-01
Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces or voices) and multimodal (faces/voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli. (2) Developmental age was significantly associated with emotion recognition in TD children, whereas this was the case only for the multimodal task in children with ASD. (3) Language impairments tended to be associated with emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928
Deepened Extinction following Compound Stimulus Presentation: Noradrenergic Modulation
ERIC Educational Resources Information Center
Janak, Patricia H.; Corbit, Laura H.
2011-01-01
Behavioral extinction is an active form of new learning involving the prediction of nonreward where reward has previously been present. The expression of extinction learning can be disrupted by the presentation of reward itself or reward-predictive stimuli (reinstatement) as well as the passage of time (spontaneous recovery) or contextual changes…
Evaluation of Contextual Variability in Prediction of Reinforcer Effectiveness
ERIC Educational Resources Information Center
Pino, Olimpia; Dazzi, Carla
2005-01-01
Previous research has shown that stimulus preference assessments based on caregiver-opinion did not coincide with results of a more systematic method of assessing reinforcing value unless stimuli that were assessed to represent preferences were also preferred on paired stimulus presentation format, and that the relative preference based on the…
Linguistic Attention Control: Attention Shifting Governed by Grammaticized Elements of Language
ERIC Educational Resources Information Center
Taube-Schiff, Marlene; Segalowitz, Norman
2005-01-01
In 2 experiments, the authors investigated attention control for tasks involving the processing of grammaticized linguistic stimuli (function words) contextualized in sentence fragments. Attention control was operationalized as shift costs obtained with adult speakers of English in an alternating-runs experimental design (R. D. Rogers & S.…
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Brain activation by visual erotic stimuli in healthy middle aged males.
Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H
2006-01-01
The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle-aged males. Ten heterosexual, right-handed males with normal sexual function were entered into the present study (mean age 52 years, range 46-55). All potential subjects were screened in a 1-h interview and were encouraged to fill out questionnaires including the Brief Male Sexual Function Inventory. All subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional brain magnetic resonance imaging (fMRI) in male volunteers while an alternately combined erotic and nonerotic film was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, the hypothalamus and thalamus were not activated. We suggest that the nonactivation of the hypothalamus and thalamus in middle-aged males may be responsible for the lesser physiological arousal in response to erotic visual stimuli.
Region segmentation and contextual cuing in visual search.
Conci, Markus; von Mühlenen, Adrian
2009-10-01
Contextual information provides an important source for behavioral orienting. For instance, in the contextual-cuing paradigm, repetitions of the spatial layout of elements in a search display can guide attention to the target location. The present study explored how this contextual-cuing effect is influenced by the grouping of search elements. In Experiment 1, four nontarget items could be arranged collinearly to form an imaginary square. The presence of such a square eliminated the contextual-cuing effect, despite the fact that the square's location still had a predictive value for the target location. Three follow-up experiments demonstrated that other types of grouping abolished contextual cuing in a similar way and that the mere presence of a task-irrelevant singleton had only a diminishing effect (by half) on contextual cuing. These findings suggest that a segmented, salient region can interfere with contextual cuing, reducing its predictive impact on search.
Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas
2011-01-01
How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus ("Fusiform face area") and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no "occipital face area"). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432
Renfro, Kaytlin J; Rupp, Heather; Wallen, Kim
2015-09-01
Recent work suggests that a woman's hormonal state when first exposed to visual sexual stimuli (VSS) modulates her initial and subsequent responses to VSS. The present study investigated whether women's initial hormonal state was related to their subjective ratings of VSS, and whether this relationship differed with VSS content. We reanalyzed previously collected data from 14 naturally cycling (NC) women and 14 women taking oral contraceptives (OCs), who subjectively rated VSS at three hormonal time-points. NC women's ratings of 216 unique sexual images were collected during the menstrual, periovulatory, and luteal phases of their menstrual cycles, and OC women's ratings were collected at comparable time-points across their pill-cycles. NC women's initial hormonal state was not related to their ratings of VSS. OC women's initial hormonal state predicted their ratings of VSS with minimal contextual information and of images depicting female-to-male oral sex. Specifically, women who entered the study in the third week of their pill-cycle (OC-3 women) rated such images as less attractive at all testing sessions than did all other women. OC-3 women were also the only women to rate decontextualized VSS as unattractive at all testing sessions. These results corroborate previous studies in which women's initial hormonal state was found to predict subsequent interest in sexual stimuli. Future work, with larger samples, should more directly investigate whether OC-3 women's negative assessment of specific types of VSS reflects a reaction to the laboratory environment or a broader mechanism, wherein OC women's sexual interests decrease late in their pill-cycle. Copyright © 2015 Elsevier Inc. All rights reserved.
Implicit Learning of Viewpoint-Independent Spatial Layouts
Tsuchiai, Taiga; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi
2012-01-01
We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations. PMID:22740837
Memory under pressure: secondary-task effects on contextual cueing of visual search.
Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas
2013-11-04
Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, and specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials, so as to examine whether WM load hampers the acquisition of, or retrieval from, contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession, so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials, though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace with a secondary WM task hampers the expression of learned configural associations.
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
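The "Bayesian ideal predictions" this abstract compares against correspond to the standard reliability-weighted cue-combination rule, in which each cue is weighted by its inverse variance. A minimal worked sketch follows: the two PSE values are taken from the abstract above, while the noise terms (sigmas) are invented for illustration.

```python
# Minimal sketch of reliability-weighted (Bayesian ideal) cue combination.
# PSE shifts (4.6 deg inertial, -10.2 deg visual) are from the abstract;
# the sigma values are illustrative assumptions.
import numpy as np

def combine(mu_vis, sigma_vis, mu_ine, sigma_ine):
    """Optimal combined heading estimate and its uncertainty."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_ine**2)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_ine
    sigma = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_ine**2))
    return mu, sigma

# Low visual coherence (unreliable vision): combined PSE sits near inertial.
print(combine(mu_vis=-10.2, sigma_vis=8.0, mu_ine=4.6, sigma_ine=2.0))
# High coherence (reliable vision): combined PSE shifts toward the visual cue.
print(combine(mu_vis=-10.2, sigma_vis=1.5, mu_ine=4.6, sigma_ine=2.0))
```

This is exactly the pattern the abstract reports: as visual coherence rises, the combined PSE migrates from the inertial estimate toward the visual one.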
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations were significantly higher with smaller standard sizes than with larger ones. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly. Consequently the luminance signals visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz that resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by the preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under conditions of dim illumination.
Contextual Cueing Effects Across the Lifespan
MERRILL, EDWARD C.; CONNERS, FRANCES A.; ROSKOS, BEVERLY; KLINGER, MARK R.; KLINGER, LAURA GROFER
2018-01-01
The authors evaluated age-related variations in contextual cueing, which reflects the extent to which visuospatial regularities can facilitate search for a target. Previous research produced inconsistent results regarding contextual cueing effects in young children and in older adults, and no study has investigated the phenomenon across the life span. Three groups (6, 20, and 70 years old) were compared. Participants located a designated target stimulus embedded in a context of distractor stimuli. During exposure, the location of the target could be predicted from the location of the distractors in each display. During test, these predictable displays were intermixed with new displays that did not predict the target location. Response times to locating predictable relative to unpredictable targets were compared. All groups exhibited facilitation effects greater than 0 (95% CIs [.02, .11], d = .4; [.01, .12], d = .4; and [.01, .10], d = .4, for the children, young adults, and older adults, respectively), indicating that contextual cueing is robust across a wide age range. The relative magnitude of contextual cueing effects was essentially identical across the age range tested, F(2, 103) = 1.71, ηp2 = .02. The authors argue that a mechanism that uses environmental covariation is available to all age ranges, but the expression of contextual cueing may depend on the way it is measured. PMID:23991612
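For readers unfamiliar with how a facilitation effect of this kind is quantified, a minimal sketch follows: the contextual cueing effect is the mean response-time saving on predictable (repeated) relative to unpredictable (novel) displays, and a within-subject Cohen's d can be computed from the paired differences. All numbers below are fabricated for illustration and do not reproduce the study's data.

```python
# Minimal sketch of quantifying a contextual cueing effect from search RTs.
# The RT distributions and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rt_repeated = rng.normal(850, 120, size=24)   # ms, predictable displays
rt_novel = rng.normal(920, 120, size=24)      # ms, novel displays

cueing = rt_novel.mean() - rt_repeated.mean() # positive = facilitation
diff = rt_novel - rt_repeated                 # paired, per participant
d = diff.mean() / diff.std(ddof=1)            # within-subject effect size
print(f"cueing effect = {cueing:.0f} ms, d = {d:.2f}")
```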
Yoshida, Masahide; Takayanagi, Yuki; Onaka, Tatsushi
2014-01-01
Fear responses play evolutionarily beneficial roles, although excessive fear memory can induce inappropriate fear expression observed in posttraumatic stress disorder, panic disorder, and phobia. To understand the neural machineries that underlie these disorders, it is important to clarify the neural pathways of fear responses. Contextual conditioned fear induces freezing behavior and neuroendocrine responses. Considerable evidence indicates that the central amygdala plays an essential role in expression of freezing behavior after contextual conditioned fear. On the other hand, mechanisms of neuroendocrine responses remain to be clarified. The medial amygdala (MeA), which is activated after contextual conditioned fear, was lesioned bilaterally by infusion of N-methyl-d-aspartate after training of fear conditioning. Plasma oxytocin, ACTH, and prolactin concentrations were significantly increased after contextual conditioned fear in sham-lesioned rats. In MeA-lesioned rats, these neuroendocrine responses but not freezing behavior were significantly impaired compared with those in sham-lesioned rats. In contrast, the magnitudes of neuroendocrine responses after exposure to novel environmental stimuli were not significantly different in MeA-lesioned rats and sham-lesioned rats. Contextual conditioned fear activated prolactin-releasing peptide (PrRP)-synthesizing neurons in the medulla oblongata. In MeA-lesioned rats, the percentage of PrRP-synthesizing neurons activated after contextual conditioned fear was significantly decreased. Furthermore, neuroendocrine responses after contextual conditioned fear disappeared in PrRP-deficient mice. Our findings suggest that the MeA-medullary PrRP-synthesizing neuron pathway plays an important role in neuroendocrine responses to contextual conditioned fear. PMID:24877622
Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei
2006-11-08
Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.
Organic light emitting board for dynamic interactive display
Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin
2017-01-01
Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151
Gestalt perception modulates early visual processing.
Herrmann, C S; Bosch, V
2001-04-17
We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Left hemispheric advantage for numerical abilities in the bottlenose dolphin.
Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur
2005-02-28
In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.
Distractor devaluation requires visual working memory.
Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E
2009-02-01
Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.
Neural circuits underlying visually evoked escapes in larval zebrafish
Dunn, Timothy W.; Gebhardt, Christoph; Naumann, Eva A.; Riegler, Clemens; Ahrens, Misha B.; Engert, Florian; Del Bene, Filippo
2015-01-01
Escape behaviors deliver organisms away from imminent catastrophe. Here, we characterize behavioral responses of freely swimming larval zebrafish to looming visual stimuli simulating predators. We report that the visual system alone can recruit lateralized, rapid escape motor programs, similar to those elicited by mechanosensory modalities. Two-photon calcium imaging of retino-recipient midbrain regions isolated the optic tectum as an important center processing looming stimuli, with ensemble activity encoding the critical image size determining escape latency. Furthermore, we describe activity in retinal ganglion cell terminals and superficial inhibitory interneurons in the tectum during looming and propose a model for how temporal dynamics in tectal periventricular neurons might arise from computations between these two fundamental constituents. Finally, laser ablations of hindbrain circuitry confirmed that visual and mechanosensory modalities share the same premotor output network. Together, we establish a circuit for the processing of aversive stimuli in the context of an innate visual behavior. PMID:26804997
Illusory visual motion stimulus elicits postural sway in migraine patients
Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi
2015-01-01
Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
Li, Chenglin; Cao, Xiaohua
2017-01-01
For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different. PMID:29018391
Multiscale neural connectivity during human sensory processing in the brain
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Runnova, Anastasia E.; Frolov, Nikita S.; Makarov, Vladimir V.; Nedaivozov, Vladimir; Koronovskii, Alexey A.; Pisarchik, Alexander; Hramov, Alexander E.
2018-05-01
Stimulus-related brain activity is analyzed using wavelet-based measures of neural interaction between occipital and parietal brain areas in the alpha (8-12 Hz) and beta (15-30 Hz) frequency bands. We show that sensory processing related to the perception of visual stimuli induces a brain response that takes different forms of parieto-occipital interaction in these bands. In the alpha frequency band, the parieto-occipital neuronal network is characterized by a homogeneous increase in interaction between all interconnected areas, both within the occipital and parietal lobes and between them. In the beta frequency band, the occipital lobe takes a leading role in the dynamics of the occipito-parietal network: the perception of a visual stimulus excites the visual center in the occipital area and then, through increased parieto-occipital interaction, this excitation is transferred to the parietal area, where the attentional center is located. When stimuli are highly ambiguous, we find a greater increase in interaction between interconnected areas in the parietal lobe, reflecting increased attentional demand. Based on the revealed mechanisms, we describe the complex response of the parieto-occipital neuronal network during the perception and primary processing of visual stimuli. The results can serve as an essential complement to the existing theory of the neural aspects of visual stimulus processing.
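The study quantifies parieto-occipital interactions with wavelet-based measures; as a simplified stand-in, the sketch below indexes band-limited coupling between two channels by correlating their Hilbert envelopes after band-pass filtering. The synthetic signals and the choice of envelope correlation (rather than the authors' wavelet measure) are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Minimal sketch: band-pass two channels, take Hilbert envelopes, and
# correlate them as a simple coupling index. Signals here are synthetic.
fs = 250
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)                 # common 10 Hz (alpha) drive
rng = np.random.default_rng(1)
occipital = shared + rng.normal(0, 1, t.size)
parietal = shared + rng.normal(0, 1, t.size)

def band_envelope(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return np.abs(hilbert(filtfilt(b, a, x)))

alpha_occ = band_envelope(occipital, 8, 12, fs)
alpha_par = band_envelope(parietal, 8, 12, fs)
print("alpha envelope correlation:", np.corrcoef(alpha_occ, alpha_par)[0, 1])
```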
Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
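Temporal realignment is typically read off a simultaneity-judgment curve as a shift in the point of subjective simultaneity (PSS). As a minimal sketch, the code below fits a Gaussian to made-up proportions of "simultaneous" responses across audiovisual lags; a post-adaptation shift of the fitted peak would index recalibration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of PSS estimation from simultaneity-judgment data.
# Response proportions below are illustrative placeholders, not study data.
soa = np.array([-300, -200, -100, 0, 100, 200, 300])    # ms, vision leads > 0
p_simult = np.array([0.10, 0.30, 0.70, 0.90, 0.80, 0.45, 0.15])

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, pss, sigma), _ = curve_fit(gaussian, soa, p_simult, p0=[1, 0, 100])
print(f"estimated PSS: {pss:.1f} ms")   # shift after adaptation = realignment
```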
Do visually salient stimuli reduce children's risky decisions?
Schwebel, David C; Lucas, Elizabeth K; Pearson, Alana
2009-09-01
Children tend to overestimate their physical abilities, and that tendency is related to risk for unintentional injury. This study tested whether or not children estimate their physical ability differently when exposed to stimuli that were highly visually salient due to fluorescent coloring. Sixty-nine 6-year-olds judged physical ability to complete laboratory-based physical tasks. Half judged ability using tasks that were painted black; the other half judged the same tasks, but the stimuli were striped black and fluorescent lime-green. Results suggest the two groups judged similarly, but children took longer to judge perceptually ambiguous tasks when those tasks were visually salient. In other words, visual salience increased decision-making time but not accuracy of judgment. These findings held true after controlling for demographic and temperament characteristics.
Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.
Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti
2017-05-10
Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner. Copyright © 2017 the authors.
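The frequency-tagging logic of this study can be illustrated with a toy spectrum: two stimuli tagged at rates f1 and f2 produce spectral peaks at those frequencies, while an interaction between them shows up at intermodulation components such as f1 + f2. The tag frequencies and signal below are synthetic, not the study's recordings.

```python
import numpy as np

# Minimal sketch of frequency-tagged analysis: read spectral amplitudes at
# the tag frequencies (per-stimulus responses) and at an intermodulation
# frequency (their interaction). The signal is synthetic.
fs, dur = 1000, 10
f1, f2 = 7.0, 9.0                        # example tag frequencies (Hz), assumed
t = np.arange(0, dur, 1 / fs)
sig = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
       + 0.3 * np.sin(2 * np.pi * (f1 + f2) * t))    # interaction term

spec = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (f1, f2, f1 + f2):
    print(f"{f:5.1f} Hz amplitude: {spec[np.argmin(np.abs(freqs - f))]:.3f}")
```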
Contextual cueing of pop-out visual search: when context guides the deployment of attention.
Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J
2010-05-01
Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. The results showed that singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.
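Both dependent measures here are straightforward to compute. A minimal sketch with placeholder numbers: the contextual cueing effect as the novel-minus-repeated RT difference, and detection accuracy as d' from hit and false alarm rates.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the two measures; all values are illustrative placeholders.
rt_repeated = np.array([512, 498, 505, 520])      # ms, repeated displays
rt_novel = np.array([548, 561, 539, 555])         # ms, non-repeated displays
print("cueing effect:", rt_novel.mean() - rt_repeated.mean(), "ms")

hit_rate, fa_rate = 0.88, 0.12                    # assumed rates
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # signal detection accuracy
print(f"d' = {d_prime:.2f}")
```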
Sugimoto, Fumie; Kimura, Motohiro; Takeda, Yuji; Katayama, Jun'ichi
2017-08-16
In a three-stimulus oddball task, the amplitude of P3a elicited by deviant stimuli increases with an increase in the difficulty of discriminating between standard and target stimuli (i.e. task-difficulty effect on P3a), indicating that attentional capture by deviant stimuli is enhanced with an increase in task difficulty. This enhancement of attentional capture may be explained in terms of the modulation of modality-nonspecific temporal attention; that is, the participant's attention directed to the predicted timing of stimulus presentation is stronger when the task difficulty increases, which results in enhanced attentional capture. The present study examined this possibility with a modified three-stimulus oddball task consisting of a visual standard, a visual target, and four types of deviant stimuli defined by a combination of two modalities (visual and auditory) and two presentation timings (predicted and unpredicted). We expected that if the modulation of temporal attention is involved in enhanced attentional capture, then the task-difficulty effect on P3a should be reduced for unpredicted compared with predicted deviant stimuli irrespective of their modality; this is because the influence of temporal attention should be markedly weaker for unpredicted compared with predicted deviant stimuli. The results showed that the task-difficulty effect on P3a was significantly reduced for unpredicted compared with predicted deviant stimuli in both the visual and the auditory modalities. This result suggests that the modulation of modality-nonspecific temporal attention induced by the increase in task difficulty is at least partly involved in the enhancement of attentional capture by deviant stimuli.
Pop-out in visual search of moving targets in the archer fish.
Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen
2015-03-10
Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited by the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
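The phrase "could not be accounted for by simple probability summation" refers to tests such as Miller's race model inequality, which bounds the multisensory RT distribution by the sum of the unisensory ones. A minimal sketch on simulated RTs (all distribution parameters are assumptions):

```python
import numpy as np

# Minimal sketch of the race model inequality: the multisensory RT CDF must
# not exceed the sum of the unisensory CDFs at any time point; violations
# imply co-activation. RTs below are simulated stand-ins.
rng = np.random.default_rng(2)
rt_a = rng.normal(420, 50, 200)        # auditory-only RTs (ms)
rt_v = rng.normal(400, 50, 200)        # visual-only RTs (ms)
rt_av = rng.normal(340, 45, 200)       # audiovisual RTs (ms)

def cdf(rts, grid):
    return np.searchsorted(np.sort(rts), grid, side="right") / rts.size

grid = np.arange(200, 700, 5)
violation = cdf(rt_av, grid) - np.minimum(cdf(rt_a, grid) + cdf(rt_v, grid), 1)
print("max race-model violation:", violation.max())   # > 0 implies co-activation
```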
The “Visual Shock” of Francis Bacon: an essay in neuroesthetics
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812
The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.
Zeki, Semir; Ishizu, Tomohiro
2013-01-01
In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock."We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.
Taniguchi, Darcy A. A.; Gagnon, Yakir; Wheeler, Benjamin R.; Johnsen, Sönke; Jaffe, Jules S.
2015-01-01
Cuttlefish are cephalopods capable of rapid camouflage responses to visual stimuli. However, it is not always clear to what these animals are responding. Previous studies have found cuttlefish to be more responsive to lateral stimuli rather than substrate. However, in previous works, the cuttlefish were allowed to settle next to the lateral stimuli. In this study, we examine whether juvenile cuttlefish (Sepia officinalis) respond more strongly to visual stimuli seen on the sides versus the bottom of an experimental aquarium, specifically when the animals are not allowed to be adjacent to the tank walls. We used the Sub Sea Holodeck, a novel aquarium that employs plasma display screens to create a variety of artificial visual environments without disturbing the animals. Once the cuttlefish were acclimated, we compared the variability of camouflage patterns that were elicited from displaying various stimuli on the bottom versus the sides of the Holodeck. To characterize the camouflage patterns, we classified them in terms of uniform, disruptive, and mottled patterning. The elicited camouflage patterns from different bottom stimuli were more variable than those elicited by different side stimuli, suggesting that S. officinalis responds more strongly to the patterns displayed on the bottom than the sides of the tank. We argue that the cuttlefish pay more attention to the bottom of the Holodeck because it is closer and thus more relevant for camouflage. PMID:26465786
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Statistical regularities in art: Relations with visual coding and perception.
Graham, Daniel J; Redies, Christoph
2010-07-21
Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.
Testing memory for unseen visual stimuli in patients with extinction and spatial neglect.
Vuilleumier, Patrik; Schwartz, Sophie; Clarke, Karen; Husain, Masud; Driver, Jon
2002-08-15
Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.
Visual Memories Bypass Normalization.
Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam
2018-05-01
How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
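Divisive normalization, the canonical computation tested here, divides each stimulus drive by the pooled drive of all concurrent stimuli plus a semisaturation constant. A minimal sketch with illustrative parameters shows the signature suppression when a second stimulus is added:

```python
import numpy as np

# Minimal sketch of divisive normalization. Parameter values are
# illustrative, not fit to the paper's data.
def normalized_response(drives, sigma=0.1, gain=1.0):
    drives = np.asarray(drives, dtype=float)
    return gain * drives / (sigma + drives.sum())

print(normalized_response([0.8]))          # single stimulus
print(normalized_response([0.8, 0.8]))     # paired stimuli suppress each other
```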
Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture
ERIC Educational Resources Information Center
West, Greg L.; Anderson, Adam A. K.; Pratt, Jay
2009-01-01
Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry
O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte
2013-01-01
Purpose: We sought brain activity that predicts visual consciousness. Methods: We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this by a 1000-ms mask then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results: We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion: We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
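The two connectivity schemes can be caricatured as different rules for a weight matrix over neurons with known preferred orientations. The toy sketch below contrasts them; the subnetwork count and weight rules are assumptions for illustration, not the paper's fitted models.

```python
import numpy as np

# Minimal sketch of the two wiring rules compared in the study.
# 'Like-to-like': strength falls off with difference in preferred orientation.
# 'Feature binding': neurons grouped into subnetworks that pool orientations.
rng = np.random.default_rng(3)
n = 60
pref = rng.uniform(0, np.pi, n)                   # preferred orientations (rad)

# like-to-like: weight grows with similarity of preferred orientations
w_like = np.exp(np.cos(2 * (pref[:, None] - pref[None, :])))
np.fill_diagonal(w_like, 0)

# feature binding: strong weights within randomly assigned subnetworks
subnet = rng.integers(0, 6, n)                    # 6 assumed subnetworks
w_bind = (subnet[:, None] == subnet[None, :]).astype(float)
np.fill_diagonal(w_bind, 0)

print("like-to-like mean weight:", w_like.mean())
print("feature-binding within-subnet fraction:", w_bind.mean())
```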
Visual feedback in stuttering therapy
NASA Astrophysics Data System (ADS)
Smolka, Elzbieta
1997-02-01
The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. The effects of visual stimuli are compared with those of acoustic and visual-acoustic stimuli. We then present methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals, along with the concept of a computerized visual echo based on acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors for use in stuttering therapy.
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing
2017-09-01
Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children, who have a rapidly developing brain and no auditory processing, such recruitment might differ across visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old, who had used CIs for 1 year, were also recruited: 10 with well-performing CIs and 10 with poorly performing CIs. Ten age- and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found between the FC3 and FC4 areas. No significant difference was found in N1 latencies or in P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Timing the impact of literacy on visual processing
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460
Visual and vestibular components of motion sickness.
Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D
1996-10-01
The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1, 21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation was used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1, 21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
Can, Wang; Zhuoran, Zhao; Zheng, Jin
2017-04-01
In the past 10 years, thousands of people have claimed to be affected by trypophobia, which is the fear of objects with small holes. Recent research suggests that people do not fear the holes; rather, images of clustered holes, which share basic visual characteristics with venomous organisms, lead to nonconscious fear. In the present study, both self-reported measures and the Preschool Single Category Implicit Association Test were adapted for use with preschoolers to investigate whether discomfort related to trypophobic stimuli was grounded in their visual features or based on a nonconsciously associated fear of venomous animals. The results indicated that trypophobic stimuli were associated with discomfort in children. This discomfort seemed to be related to the typical visual characteristics and pattern properties of trypophobic stimuli rather than to nonconscious associations with venomous animals. The association between trypophobic stimuli and venomous animals vanished when the typical visual characteristics of trypophobic features were removed from colored photos of venomous animals. Thus, the discomfort felt toward trypophobic images might be an instinctive response to their visual characteristics rather than the result of a learned but nonconscious association with venomous animals. Therefore, it is questionable whether it is justified to legitimize trypophobia.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
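The sub-additivity criterion (AV - V < A) amounts to subtracting the visual-only ERP from the audiovisual ERP and comparing the residual to the auditory-only ERP within the component's window. A minimal sketch on synthetic waveforms:

```python
import numpy as np

# Minimal sketch of the sub-additivity test for the auditory N1: compare
# (AV - V) to A within the N1 window. Waveforms below are synthetic
# single-channel averages, not study data.
t = np.linspace(-0.1, 0.4, 500)                   # s, relative to sound onset
erp_a = -2.0 * np.exp(-((t - 0.1) ** 2) / 0.001)  # auditory N1 (uV)
erp_v = 0.5 * np.sin(2 * np.pi * 3 * t)           # visual contribution
erp_av = erp_v + 0.7 * erp_a                      # suppressed auditory part

n1 = slice(np.searchsorted(t, 0.08), np.searchsorted(t, 0.12))
print("A    N1 mean:", erp_a[n1].mean())
print("AV-V N1 mean:", (erp_av - erp_v)[n1].mean())   # smaller in magnitude
```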
Shape and color conjunction stimuli are represented as bound objects in visual working memory.
Luria, Roy; Vogel, Edward K
2011-05-01
The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building block of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity so that it may not be represented as bound objects. Additionally, it was argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (the CDA) as an electrophysiological marker of WM capacity, to test those alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
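The CDA itself is computed as the contralateral-minus-ipsilateral amplitude difference over posterior electrodes during the retention interval. A minimal sketch on simulated trial data:

```python
import numpy as np

# Minimal sketch of computing the contralateral delay activity (CDA) from
# simulated (trials x time samples) amplitude arrays; values are synthetic.
rng = np.random.default_rng(4)
contra = rng.normal(-1.5, 1.0, (100, 300))   # uV, contralateral to memoranda
ipsi = rng.normal(-0.5, 1.0, (100, 300))     # uV, ipsilateral

cda = (contra - ipsi).mean(axis=0)           # CDA waveform across the delay
print("mean CDA amplitude:", cda.mean(), "uV")
```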
The acquisition of contextual cueing effects by persons with and without intellectual disability.
Merrill, Edward C; Conners, Frances A; Yang, Yingying; Weathington, Dana
2014-10-01
Two experiments were conducted to compare the acquisition of contextual cueing effects of adolescents and young adults with intellectual disabilities (ID) relative to typically developing children and young adults. Contextual cueing reflects an implicit, memory based attention guidance mechanism that results in faster search for target locations that have been previously experienced in a predictable context. In the study, participants located a target stimulus embedded in a context of numerous distracter stimuli. During a learning phase, the location of the target was predictable from the location of the distracters in the search displays. We then compared response times to locating predictable relative to unpredictable targets presented in a test phase. In Experiment 1, all of the distracters predicted the location of the target. In Experiment 2, half of the distracters predicted the location of the target while the other half varied randomly. The participants with ID exhibited significant contextual facilitation in both experiments, with the magnitude of facilitation being similar to that of the typically developing (TD) children and adults. We concluded that deficiencies in contextual cueing are not necessarily associated with low measured intelligence that results in a classification of ID. Copyright © 2014 Elsevier Ltd. All rights reserved.
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Zupan, Barbra; Neumann, Dawn
2016-01-01
The current study presented 60 people with traumatic brain injury (TBI) and 60 controls with isolated facial emotion expressions, isolated vocal emotion expressions, and multimodal (i.e., film clips) stimuli that included contextual cues. All stimuli were presented via computer. Participants were required to indicate how the person in each stimulus was feeling using a forced-choice format. Additionally, for the film clips, participants had to indicate how they felt in response to the stimulus, and the level of intensity with which they experienced that emotion. PMID:27213280
Magnetic stimulation of visual cortex impairs perceptual learning.
Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio
2016-12-01
The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape-identification task differentially modulates activity in both fronto-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the posterior intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we applied transcranial magnetic stimulation over the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape-identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to stimulation over pIPS and a sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.
Task Analysis Schema Based on Cognitive Style and Supplantation…
Ausburn, F. B.
1980-02-01
Technical report, University of Oklahoma, Norman, College of Education. Only OCR fragments of the abstract survive; they describe a task-analysis schema matching task demands to learner characteristics, e.g., tasks requiring the use of kinesthetic or tactile stimuli (the ability to transform kinesthetic stimuli into visual images, or to learn directly from tactile or kinesthetic impressions) paired with traits such as visual/haptic preference and field independence/dependence.
The effect of spatial attention on invisible stimuli.
Shin, Kilho; Stolte, Moritz; Chong, Sang Chul
2009-10-01
The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.
Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara
2012-03-01
The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.
Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.
Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A
2008-11-01
According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.
Gamma band activity and the P3 reflect post-perceptual processes, not visual awareness
Pitts, Michael A.; Padwal, Jennifer; Fennelly, Daniel; Martínez, Antígona; Hillyard, Steven A.
2014-01-01
A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects' task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception. PMID:25063731
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and the guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating the neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas, the primary visual cortex (V1) and the extrastriate cortex (V2 and V3), based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task, relative to a condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for upright faces, whereas no such effect was found for inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor, in addition to physical salience and task-goal relevance, that affects priority maps in extrastriate cortex: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli, and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli can be found in the primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps by showing that they are modulated not only by physical salience and task-goal relevance, but also by the configuration of stimulus images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
Decoding complex flow-field patterns in visual working memory.
Christophel, Thomas B; Haynes, John-Dylan
2014-05-01
There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
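The delay-period decoding logic described here can be sketched compactly: train a classifier on voxel patterns from the memory delay and test, by cross-validation, whether the memorized stimulus can be read out above chance. The sketch below uses scikit-learn with synthetic data standing in for real fMRI patterns (array sizes, labels, and the injected signal are all invented for the demo):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical delay-period patterns: 120 trials x 500 voxels, with
# labels indicating which of four flow-field stimuli was memorized.
X = rng.standard_normal((120, 500))
y = np.repeat(np.arange(4), 30)
X[y == 1] += 0.3          # inject stimulus information for the demo

# Cross-validated accuracy above chance (0.25) indicates that the
# region's activity patterns carry information about the memorized item.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"Decoding accuracy: {acc:.2f} (chance = 0.25)")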
Lexical Effects on Speech Perception in Individuals with "Autistic" Traits
ERIC Educational Resources Information Center
Stewart, Mary E.; Ota, Mitsuhiko
2008-01-01
It has been claimed that Autism Spectrum Disorder (ASD) is characterized by a limited ability to process perceptual stimuli in reference to the contextual information of the percept. Such a connection between a nonholistic processing style and behavioral traits associated with ASD is thought to exist also within the neurotypical population albeit…
ERIC Educational Resources Information Center
Balconi, Michela; Carrera, Alba
2007-01-01
The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…
On Transitions between Representations: The Role of Contextual Reasoning in Calculus Problem Solving
ERIC Educational Resources Information Center
Zazkis, Dov
2016-01-01
This article argues for a shift in how researchers discuss and examine students' uses and understandings of multiple representations within a calculus context. An extension of Zazkis, Dubinsky, and Dautermann's (1996) visualization/analysis framework to include contextual reasoning is proposed. Several examples that detail transitions between…
The Effect of Verbal Contextual Information in Processing Visual Art.
ERIC Educational Resources Information Center
Koroscik, Judith S.; And Others
1985-01-01
Verbal contextual information affected photography and nonphotography students' performance on semantic retention tests. For example, correct titles aided the formation and retention of accurate memories, while erroneous titles misled students into remembering meanings that had relatively little to do with what was actually pictured in the…
Selective attention determines emotional responses to novel visual stimuli.
Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T
2003-11-01
Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.
Stimulus relevance modulates contrast adaptation in visual cortex
Keller, Andreas J; Houlton, Rachael; Kampa, Björn M; Lesica, Nicholas A; Mrsic-Flogel, Thomas D; Keller, Georg B; Helmchen, Fritjof
2017-01-01
A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex. DOI: http://dx.doi.org/10.7554/eLife.21589.001 PMID:28130922
Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study.
Sperling, Ingmar; Baldofski, Sabrina; Lüthold, Patrick; Hilbert, Anja
2017-08-19
Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED.
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question of whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or the opposite direction as a concurrent tactile stimulus consisting of strokes given on the infant's back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory versions, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently (opposite direction) moving pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas
2013-01-01
Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542
Neural correlates of contextual cueing are modulated by explicit learning.
Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A
2011-10-01
Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently of other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance when the stimuli were presented simultaneously in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control disappeared when each stimulus was presented sequentially in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performance across studies with differential visual cue control is unwarranted, and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Audio-Visual Speech Perception Is Special
ERIC Educational Resources Information Center
Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.
2005-01-01
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…
Dynamic Prototypicality Effects in Visual Search
ERIC Educational Resources Information Center
Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan
2011-01-01
In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…
ERIC Educational Resources Information Center
Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel
2016-01-01
Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…
Determining the Capacity of Time-Based Selection
ERIC Educational Resources Information Center
Watson, Derrick G.; Kunar, Melina A.
2012-01-01
In visual search, a set of distractor items can be suppressed from future selection if they are presented (previewed) before a second set of search items arrive. This "visual marking" mechanism provides a top-down way of prioritizing the selection of new stimuli, at the expense of old stimuli already in the field (Watson & Humphreys,…
Functional neuronal processing of body odors differs from that of similar common odors.
Lundström, Johan N; Boyle, Julie A; Zatorre, Robert J; Jones-Gotman, Marilyn
2008-06-01
Visual and auditory stimuli of high social and ecological importance are processed in the brain by specialized neuronal networks. To date, this has not been demonstrated for olfactory stimuli. By means of positron emission tomography, we sought to elucidate the neuronal substrates behind body odor perception to answer the question of whether the central processing of body odors differs from perceptually similar nonbody odors. Body odors were processed by a network that was distinctly separate from common odors, indicating a separation in the processing of odors based on their source. Smelling a friend's body odor activated regions previously seen for familiar stimuli, whereas smelling a stranger activated amygdala and insular regions akin to what has previously been demonstrated for fearful stimuli. The results provide evidence that social olfactory stimuli of high ecological relevance are processed by specialized neuronal networks similar to what has previously been demonstrated for auditory and visual stimuli.
Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.
Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin
2016-04-01
There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function, as manifest in the ability to remember foreign-language vocabulary, two groups of student volunteers (N = 64) aged from 17 to 25 years were shown a PowerPoint presentation of 21 target-language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs, and the participants were provided with snacks so that they would be comfortable and relaxed. After the presentation, each group was exposed to one of two forms of visual stimuli for 27 min: either visually affective content (sexually suggestive, violent, or frightening material) or neutral content (a nature documentary). The group exposed to the emotive visual stimuli remembered significantly fewer words than the group that watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies of the visual processing of emotional stimuli have revealed a preference for specific visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used faces and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative roles of spatial frequencies in processing emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant for rapidly identifying the self-emotional experience (unpleasant, pleasant, and neutral), while LSF information was required for rapidly identifying the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli, whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both the emotional and motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
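HSF and LSF versions of a scene, of the kind this study contrasts, are conventionally produced by high- and low-pass spatial filtering. A minimal sketch using a Gaussian low-pass filter; the cutoff (sigma) and the random image are arbitrary stand-ins, not the authors' stimuli or parameters:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
scene = rng.random((256, 256))       # stand-in for a natural scene image

sigma = 4.0                          # assumed cutoff; tuned per stimulus set
lsf = gaussian_filter(scene, sigma)  # low spatial frequencies only
hsf = scene - lsf                    # residual = high spatial frequencies
print(lsf.shape, hsf.shape)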
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Woi, Pui Juan; Kaur, Sharanjeet; Waugh, Sarah J.; Hairol, Mohd Izzuddin
2016-01-01
The human visual system is sensitive in detecting objects that differ in luminance from their background, known as first-order or luminance-modulated (LM) stimuli. We are also able to detect objects that have the same mean luminance as their background and differ only in contrast (or other attributes). Such objects are known as second-order, or contrast-modulated (CM), stimuli. CM stimuli are thought to be processed in higher visual areas than LM stimuli and may be more susceptible to ageing. We compared the visual acuities (VA) of five healthy older adults (54.0±1.83 years old) and five healthy younger adults (25.4±1.29 years old) with LM and CM letters under monocular and binocular viewing. For monocular viewing, age had no effect on VA [F(1, 8) = 2.50, p > 0.05]. However, there was a significant main effect of age on VA under binocular viewing [F(1, 8) = 5.67, p < 0.05]. Binocular VA with CM letters in younger adults was approximately two lines better than that in older adults. For LM, binocular summation ratios were similar for older (1.16±0.21) and younger (1.15±0.06) adults. For CM, younger adults had a higher binocular summation ratio (1.39±0.08) than older adults (1.12±0.09). Binocular viewing improved VA with LM letters similarly for both groups. However, in older adults, binocular viewing did not improve VA with CM letters as much as in younger adults. This could reflect an ageing-related decline in higher visual areas, most likely beyond V1, which may be missed if vision is measured with luminance-based stimuli alone. PMID:28184281
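The binocular summation ratios above follow the conventional definition: binocular sensitivity (or acuity expressed as sensitivity) divided by the better monocular value, with roughly sqrt(2), about 1.41, being the classic prediction for full neural summation. A toy computation with invented numbers:

# Binocular summation ratio: binocular sensitivity relative to the
# better eye alone. All values below are invented for illustration.
sens_left, sens_right, sens_binocular = 1.8, 2.0, 2.3

ratio = sens_binocular / max(sens_left, sens_right)
print(f"Binocular summation ratio: {ratio:.2f}")  # ~1.15, cf. the LM data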
Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.
Badgaiyan, Rajendra D
2012-12-01
Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness, which can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during the processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients, however, continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.
Sharpe, Melissa J.; Killcross, Simon
2015-01-01
The prelimbic (PL) cortex allows rodents to adapt their responding under changing experimental circumstances. In line with this, the PL cortex has been implicated in strategy set shifting, attentional set shifting, the resolution of response conflict, and the modulation of attention towards predictive stimuli. One interpretation of this research is that the PL cortex is involved in using information garnered from higher-order cues in the environment to modulate how an animal responds to environmental stimuli. However, data supporting this view of PL function in the aversive domain are lacking. In the following experiments, we attempted to answer two questions. Firstly, we wanted to investigate whether the role of the PL cortex in using higher-order cues to influence responding generalizes across appetitive and aversive domains. Secondly, as much of the research has focused on a role for the PL cortex in performance, we wanted to assess whether this region is also involved in the acquisition of the hierarchical associations that support an ability to use higher-order cues to modulate responding. In order to answer these questions, we assessed the impact of PL inactivation during both the acquisition and the expression of a contextual bi-conditional discrimination. A contextual bi-conditional discrimination involves presenting two stimuli. In one context, one stimulus is paired with shock while the other is presented without shock. In another context, these contingencies are reversed. Thus, animals have to use the prevailing contextual cues to disambiguate the significance of each stimulus and respond appropriately. We found that PL inactivation disrupted both the encoding and the expression of these context-dependent associations. This supports a role for the PL cortex in allowing higher-order cues to modulate both learning about, and responding towards, different cues. We discuss these findings in the broader context of functioning in the medial prefrontal cortex (PFC). PMID:25628542
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Color vision in attention-deficit/hyperactivity disorder: a pilot visual evoked potential study.
Kim, Soyeon; Banaschewski, Tobias; Tannock, Rosemary
2015-01-01
Individuals with attention-deficit/hyperactivity disorder (ADHD) are reported to manifest visual problems (including ophthalmological problems and altered color perception, particularly for blue-yellow stimuli), but findings are inconsistent. Accordingly, this study investigated visual function and color perception in adolescents with ADHD using color Visual Evoked Potentials (cVEP), which provide an objective measure of color perception. Thirty-one adolescents aged 13-18 (16 with a confirmed diagnosis of ADHD and 15 healthy peers matched for age, gender, and IQ) participated in the study. All underwent an ophthalmological exam as well as electrophysiological testing with cVEP, which measured the latency and amplitude of the neural P1 response to chromatic (blue-yellow, red-green) and achromatic stimuli. No intergroup differences were found in the ophthalmological exam. However, significantly larger P1 amplitudes were found for blue-yellow stimuli, but not red-green or achromatic stimuli, in the ADHD group (particularly in the medicated subgroup) compared to controls. The larger P1 amplitude for blue-yellow in the ADHD group may account for the lack of group differences on color perception tasks. We speculate that the larger amplitude for blue-yellow stimuli in early sensory processing (P1) might reflect a compensatory strategy for underlying problems, including compromised retinal input from s-cones due to hypo-dopaminergic tone. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Neurons forming optic glomeruli compute figure-ground discriminations in Drosophila.
Aptekar, Jacob W; Keleş, Mehmet F; Lu, Patrick M; Zolotova, Nadezhda M; Frye, Mark A
2015-05-13
Many animals rely on visual figure-ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure-ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula--one of the four, primary neuropiles of the fly optic lobe--performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure-ground stimuli in a homologous manner to the behavior; "figure-like" stimuli are coded similar to one another and "ground-like" stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. Copyright © 2015 the authors 0270-6474/15/357587-13$15.00/0.
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
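The MLE model invoked here makes two testable predictions: the bimodal estimate weights each cue by its reliability (inverse variance), and the bimodal variance falls below either unimodal variance. A compact numeric sketch of those predictions; the unimodal standard deviations and location estimates are placeholders, not the study's fitted values:

# Optimal (maximum-likelihood) audiovisual integration predictions.
sigma_a, sigma_v = 6.0, 2.0           # placeholder unimodal localization SDs

w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)          # reliability weight
w_a = 1.0 - w_v
sigma_av = (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5

x_a, x_v = 4.0, -1.0                  # placeholder unimodal estimates (deg)
x_av = w_a * x_a + w_v * x_v          # predicted bimodal estimate
print(f"w_v = {w_v:.2f}, bimodal SD = {sigma_av:.2f}, estimate = {x_av:.2f}")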
Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises under these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed in both Experiments 1 and 2 when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
People, clothing, music, and arousal as contextual retrieval cues in verbal memory.
Standing, Lionel G; Bobbitt, Kristin E; Boisvert, Kathryn L; Dayholos, Kathy N; Gagnon, Anne M
2008-10-01
Four experiments (N = 164) on context-dependent memory were performed to explore the effects on verbal memory of incidental cues during the test session that replicated specific features of the learning session. These features involved (1) bystanders, (2) the clothing of the experimenter, (3) background music, and (4) the arousal level of the subject. Social contextual cues (bystanders or experimenter clothing) improved verbal recall or recognition. However, recall decreased when the contextual cue was a different stimulus from the same conceptual category (piano music by Chopin) as the one heard during learning. Memory was unaffected by congruent internal cues, produced by the same physiological arousal level (low, moderate, or high heart rate) during the learning and test sessions. However, recall increased with the level of arousal across the three congruent conditions. The results emphasize the effectiveness as retrieval cues of stimuli which are socially salient, concrete, and external.
Pragmatics as Metacognitive Control
Kissine, Mikhail
2016-01-01
The term “pragmatics” is often used to refer without distinction, on one hand, to the contextual selection of interpretation norms and, on the other hand, to the context-sensitive processes guided by these norms. Pragmatics in the first sense depends on language-independent contextual factors that can, but need not, involve Theory of Mind; in the second sense, pragmatics is a language-specific metacognitive process, which may unfold at an unconscious level without involving any mental state (meta-)representation. Distinguishing between these two ways in which context drives the interpretation of communicative stimuli helps dissolve the dispute between proponents of an entirely Gricean pragmatics and those who claim that some pragmatic processes do not depend on mind-reading capacities. According to the model defended in this paper, the typology of pragmatic processes is not entirely determined by a hierarchy of meanings, but by contextually set norms of interpretation. PMID:26834671
Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.
2018-01-01
Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, LED displays render high-frequency waveforms with a better refresh rate. In this study, we presented simple and rhythmic high-frequency sine wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns were chosen: 3 simple (repetition of each of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 (25 ± 2.1) years) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data, including SSVEP features and fatigue rates for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had similarly high accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine wave visual stimuli warrant further research for human SSVEP-BCI studies. PMID:29892219
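Of the three decoding methods named in the abstract, canonical correlation analysis (CCA) is the most standard for SSVEP work: the multichannel EEG segment is correlated against sine-cosine reference templates at each candidate stimulation frequency (plus harmonics), and the frequency with the largest canonical correlation wins. The sketch below shows that standard approach, not the authors' code; the channel count, sampling rate, and harmonic count are arbitrary assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_templates(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine references at freq and harmonics, shape (n_samples, 2*h)."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(eeg, fs, candidate_freqs):
    """Pick the frequency whose references best correlate with the EEG.

    eeg: array of shape (n_samples, n_channels)."""
    scores = []
    for f in candidate_freqs:
        refs = reference_templates(f, fs, eeg.shape[0])
        cca = CCA(n_components=1)
        x_scores, y_scores = cca.fit_transform(eeg, refs)
        scores.append(np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores

# Toy demonstration: 8-channel surrogate EEG dominated by a 30 Hz component
fs, n_samples = 250, 500                         # a 2-s analysis window
t = np.arange(n_samples) / fs
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 30 * t)[:, None]
       + 0.8 * rng.standard_normal((n_samples, 8)))
print(classify_ssvep(eeg, fs, [25.0, 30.0, 35.0])[0])  # expected: 30.0
```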
Age-related differences in audiovisual interactions of semantically different stimuli.
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children, the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitatory effect of audiovisual interaction on semantic processing undergoes developmental change and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Computer programming for generating visual stimuli.
Bukhari, Farhan; Kurylo, Daniel D
2008-02-01
Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
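The trial-event framework described above maps naturally onto a loop in which each trial draws a stimulus, waits for a response, logs the reaction time, and lets a contingency algorithm adjust the next trial. The skeleton below is a hedged illustration of that structure in plain Python; the actual downloadable program and its API differ, and rendering and input are stubbed out with hypothetical placeholder functions.

```python
import time
import random

def draw_stimulus(contrast):
    """Placeholder: a real program would render via its graphics package."""
    print(f"[stimulus shown at contrast {contrast:.2f}]")

def get_response(timeout_s=2.0):
    """Placeholder: poll the keyboard; here we simulate a keypress latency."""
    latency = random.uniform(0.3, 0.9)
    time.sleep(min(latency, timeout_s))
    return ("space" if latency < timeout_s else None), latency

def run_session(n_trials=5):
    contrast = 0.5
    log = []
    for trial in range(n_trials):
        draw_stimulus(contrast)
        t0 = time.monotonic()
        key, _ = get_response()
        rt = time.monotonic() - t0          # reaction time for this trial
        log.append((trial, contrast, key, rt))
        # Contingency algorithm, e.g., a simple staircase on detections
        contrast *= 0.9 if key else 1.1
    return log

for row in run_session():
    print(row)
```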
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5, and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms) in ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1, and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
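ERP integration analyses of this kind rest on an additive-model comparison: the response evoked by combined audiovisual stimulation is compared against the sum of the unisensory auditory and visual responses, and a reliable difference within some latency window is taken as evidence of integration. A schematic numpy sketch with synthetic waveforms follows; the amplitudes, latencies, and test window are invented for illustration, not the study's data or statistics.

```python
import numpy as np

fs = 1000                                  # Hz; 1-ms resolution
t = np.arange(-100, 500) / fs              # epoch from -100 to 499 ms

def erp(latency_ms, amp, width_ms=30.0):
    """Synthetic evoked component modeled as a Gaussian bump (microvolts)."""
    return amp * np.exp(-((t * 1000 - latency_ms) ** 2) / (2 * width_ms ** 2))

erp_a  = erp(100, 4.0)                     # auditory-alone average
erp_v  = erp(150, 3.0)                     # visual-alone average
erp_av = erp(100, 4.0) + erp(150, 3.0) + erp(180, 1.5)  # extra AV component

difference = erp_av - (erp_a + erp_v)      # nonzero span marks integration
window = (t >= 0.14) & (t <= 0.21)         # e.g., a 140-210 ms test window
print(f"mean AV - (A+V) in window: {difference[window].mean():.2f} uV")
```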
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
Accessory stimulus modulates executive function during stepping task
Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo
2015-01-01
When multiple sensory modalities are simultaneously presented, reaction times can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli simultaneously presented with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses dividing trials according to whether an anticipatory postural adjustment error occurred revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without anticipatory postural adjustment errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and that, exclusively under spatial incompatibility, they facilitate automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321
Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie
2018-03-07
We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
Locomotion Enhances Neural Encoding of Visual Stimuli in Mouse V1
2017-01-01
Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are thought to be initiated by projections from the mesencephalic locomotor region, the latter through a disinhibitory circuit in V1. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a threefold to fivefold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving. SIGNIFICANCE STATEMENT This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of 3 the time needed for information to accumulate in the visual cortex that allows the distinction of different visual stimuli. Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is the increasing of firing rates, and in layer V, where the major effect is the reduction in noise correlations. PMID:28264980
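The noise-correlation mechanism referred to above is the trial-to-trial correlation between pairs of neurons after subtracting each neuron's stimulus-driven mean response, averaged over pairs; a drop in this quantity during locomotion improves population discriminability even at fixed firing rates. Below is a compact sketch of that computation on a trials x neurons matrix of synthetic spike counts; the data and the shared-gain noise model are illustrative assumptions, not the recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50

# Synthetic spike counts: a shared gain fluctuation induces noise correlations
shared = rng.standard_normal(n_trials)[:, None]        # common slow state
counts = 10 + 0.5 * shared + rng.standard_normal((n_trials, n_neurons))

def mean_noise_correlation(responses):
    """Average off-diagonal correlation of trial-by-trial residuals."""
    residuals = responses - responses.mean(axis=0)     # remove stimulus mean
    corr = np.corrcoef(residuals.T)                    # neurons x neurons
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diag.mean()

print(f"mean pairwise noise correlation: {mean_noise_correlation(counts):.3f}")
# A locomotion-like decorrelation would show up as a drop in this value.
```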
Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.
Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi
2018-07-01
Despite accumulating evidence for the spatial rule governing cross-modal interaction, whereby interaction depends on the spatial consistency of stimuli, it is still unclear whether the 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) on visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or receding visual stimuli. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Walter, Sabrina; Keitel, Christian; Müller, Matthias M
2016-01-01
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the "across-hemifield" conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin
2015-03-01
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
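The encoding side of a time encoding machine can be illustrated with its simplest instance, an ideal integrate-and-fire neuron: the signal plus a bias is integrated and a spike is emitted whenever the integral reaches a threshold, so the spike times alone carry the stimulus and, under Nyquist-type conditions, permit its recovery. The sketch below is a generic single-channel illustration of that idea with arbitrary parameter values; the paper's color video TEMs use populations of such circuits with channel mixing, which this does not reproduce.

```python
import numpy as np

def iaf_encode(u, dt, bias=1.5, kappa=1.0, delta=0.02):
    """Ideal integrate-and-fire time encoding of a signal u sampled at step dt.

    A spike is emitted whenever the running integral of (bias + u) reaches
    kappa * delta; spike times are returned in seconds."""
    y, spikes = 0.0, []
    for i, u_i in enumerate(u):
        y += (bias + u_i) * dt
        if y >= kappa * delta:
            spikes.append(i * dt)
            y -= kappa * delta
    return np.array(spikes)

dt = 1e-4
t = np.arange(0.0, 0.2, dt)
u = 0.5 * np.sin(2 * np.pi * 20 * t) + 0.3 * np.cos(2 * np.pi * 35 * t)
spike_times = iaf_encode(u, dt)  # bias + u > 0, so the encoding is invertible
# Spiking is denser where bias + u is large; interspike intervals encode u
print(f"{spike_times.size} spikes; first five: {np.round(spike_times[:5], 4)}")
```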
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interactions between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
An exploratory study of temporal integration in the peripheral retina of myopes
NASA Astrophysics Data System (ADS)
Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.
2017-08-01
The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.
The visual attention span deficit in dyslexia is visual and not verbal.
Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane
2012-06-01
The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to adjudicate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance on the non-verbal visual processing task in normally reading children. Furthermore, VA span-impaired dyslexic children are also impaired on the categorization task, independently of stimulus type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.
Object perception is selectively slowed by a visually similar working memory load.
Robinson, Alan; Manzi, Alberto; Triesch, Jochen
2008-12-22
The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.
Self-construal differences in neural responses to negative social cues.
Liddell, Belinda J; Felmingham, Kim L; Das, Pritha; Whitford, Thomas J; Malhi, Gin S; Battaglini, Eva; Bryant, Richard A
2017-10-01
Cultures differ substantially in representations of the self. Whereas individualistic cultural groups emphasize an independent self, reflected in processing biases towards centralized salient objects, collectivistic cultures are oriented towards an interdependent self, attending to contextual associations between visual cues. It is unknown how these perceptual biases may affect brain activity in response to negative social cues. Moreover, while some studies have shown that individual differences in self-construal moderate cultural group comparisons, few have examined self-construal differences separate from culture. To investigate these issues, a final sample of healthy participants high in trait levels of either collectivistic self-construal (n=16) or individualistic self-construal (n=19), regardless of cultural background, completed a negative social cue evaluation task designed to engage face/object vs context-specific neural processes whilst undergoing fMRI scanning. Between-group analyses revealed that the collectivistic group exclusively engaged the parahippocampal gyrus (parahippocampal place area), a region critical to contextual integration, during negative face processing, suggesting compensatory activations when contextual information was missing. The collectivist group also displayed enhanced negative context dependent brain activity involving the left superior occipital gyrus/cuneus and right anterior insula. By contrast, the individualistic group did not engage object or localized face processing regions as predicted, but rather demonstrated heightened appraisal and self-referential activations in medial prefrontal and temporoparietal regions to negative contexts, again suggesting compensatory processes when focal cues were absent. Although individualists also appeared more sensitive to negative faces in the scenes, showing right middle cingulate gyrus, dorsal prefrontal, and parietal activations, this activity was observed relative to the scrambled baseline and, given that prefrontal and occipital regions were also engaged by neutral stimuli, may reflect an individualistic pattern of processing all social cues more generally. These findings suggest that individual differences in self-construal may be an important organizing framework facilitating perceptual processes to emotionally salient social cues, beyond the boundary of cultural group comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tan, Bingyao; Mason, Erik; MacLellan, Ben; Bizheva, Kostadinka
2017-02-01
Visually evoked changes in retinal blood flow can serve as an important research tool to investigate eye diseases such as glaucoma and diabetic retinopathy. In this study we used a combined, research-grade, high-resolution Doppler OCT+ERG system to study changes in retinal blood flow (RBF) and retinal neuronal activity in response to visual stimuli of different intensities, durations, and types (flicker vs single flash). Specifically, we used single-flash white light stimuli of 10 ms and 200 ms duration, and flicker stimuli of 1 s and 2 s duration with a 20% duty cycle. The study was conducted in vivo in pigmented rats. Both single flash (SF) and flicker stimuli caused an increase in RBF. The 10 ms SF stimulus did not generate any consistent measurable response, while the 200 ms SF of the same intensity generated a 4% change in RBF, peaking 1.5 s after stimulus onset. Single flash stimuli produced a 2x smaller change in RBF and a 30% earlier RBF peak response compared with flicker stimuli of the same intensity and duration. Doubling the intensity of SF or flicker stimuli increased the RBF peak magnitude by 1.5x. Shortening the flicker stimulus duration by 2x increased the RBF recovery rate by 2x but had no effect on the rate of RBF change from baseline to peak.
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors.
Lower pitch is larger, yet falling pitches shrink.
Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E
2014-01-01
Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.
Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D
2017-03-01
Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digitized Sounds, Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. Males had significantly different emotional regulation scores from females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements, and that gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used for sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper. Copyright © 2016 Elsevier B.V. All rights reserved.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Independence between implicit and explicit processing as revealed by the Simon effect.
Lo, Shih-Yu; Yeh, Su-Ling
2011-09-01
Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized responses. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect, but there was still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion. Copyright © 2010 Elsevier Inc. All rights reserved.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.