Sample records for influences on visual processing

  1. Top-down preparation modulates visual categorization but not subjective awareness of objects presented in natural backgrounds.

    PubMed

    Koivisto, Mika; Kahila, Ella

    2017-04-01

    Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not directly separated the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation acts at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Interactions between attention, context and learning in primary visual cortex.

    PubMed

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

    Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  3. Attention affects visual perceptual processing near the hand.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  4. Saliency affects feedforward more than feedback processing in early visual cortex.

    PubMed

    Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony

    2013-07-01

    Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency, based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment, we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Top-Down Beta Enhances Bottom-Up Gamma

    PubMed Central

    Thompson, William H.

    2017-01-01

    Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection.

    SIGNIFICANCE STATEMENT: Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma-band influences via cross-frequency interaction. We evaluate this hypothesis, determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma-band influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697

  6. Semantic-based crossmodal processing during visual suppression.

    PubMed

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.

  7. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    AOARD report, October 24, 2009 (contract FA4869-08-0004, AOARD project 074018). The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  8. Changes in visual perspective influence brain activity patterns during cognitive perspective-taking of other people's pain.

    PubMed

    Vistoli, Damien; Achim, Amélie M; Lavoie, Marie-Audrey; Jackson, Philip L

    2016-05-01

    Empathy refers to our capacity to share and understand the emotional states of others. It relies on two main processes according to existing models: an effortless affective sharing process based on neural resonance and a more effortful cognitive perspective-taking process enabling the ability to imagine and understand how others feel in specific situations. Until now, studies have focused on factors influencing the affective sharing process but little is known about those influencing the cognitive perspective-taking process and the related brain activations during vicarious pain. In the present fMRI study, we used the well-known physical pain observation task to examine whether the visual perspective can influence, in a bottom-up way, the brain regions involved in taking others' cognitive perspective to attribute their level of pain. We used a pseudo-dynamic version of this classic task which features hands in painful or neutral daily life situations while orthogonally manipulating: (1) the visual perspective with which hands were presented (first-person versus third-person conditions) and (2) the explicit instructions to imagine oneself or an unknown person in those situations (Self versus Other conditions). The cognitive perspective-taking process was investigated by comparing Other and Self conditions. When examined across both visual perspectives, this comparison showed no supra-threshold activation. Instead, the Other versus Self comparison led to a specific recruitment of the bilateral temporo-parietal junction when hands were presented according to a first-person (but not third-person) visual perspective. The present findings identify the visual perspective as a factor that modulates the neural activations related to cognitive perspective-taking during vicarious pain and show that this complex cognitive process can be influenced by perceptual stages of information processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Time-Resolved Influences of Functional DAT1 and COMT Variants on Visual Perception and Post-Processing

    PubMed Central

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase (COMT) genes on the time-course of visual processing in a contingent negative variation (CNV) task.

    Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed.

    Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected, while the preceding visual perception stages were not influenced.

    Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in the visual, motor, and perhaps other systems. PMID:22844499

  10. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    PubMed

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase (COMT) genes on the time-course of visual processing in a contingent negative variation (CNV) task. 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected, while the preceding visual perception stages were not influenced. Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in the visual, motor, and perhaps other systems.

  11. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    PubMed

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

    Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for the patients' internal supramodal representation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  13. Audiovisual speech perception development at varying levels of perceptual processing.

    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  14. Role of Visual Speech in Phonological Processing by Children With Hearing Loss

    PubMed Central

    Jerger, Susan; Tye-Murray, Nancy; Abdi, Hervé

    2011-01-01

    Purpose: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL).

    Method: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in place of articulation, or conflicting in voicing—for example, the picture “pizza” coupled with the distractors “peach,” “teacher,” or “beast,” respectively. Speed of picture naming was measured.

    Results: The conflicting conditions slowed naming, and phonological processing by children with HL displayed the age-related shift in sensitivity to visual speech seen in children with NH, although with developmental delay. Younger children with HL exhibited a disproportionately large influence of visual speech and a negligible influence of auditory speech, whereas older children with HL showed a robust influence of auditory speech with no benefit to performance from adding visual speech. The congruent conditions did not speed naming in children with HL, nor did the addition of visual speech influence performance. Unexpectedly, the /ʌ/-vowel congruent distractors slowed naming in children with HL and decreased articulatory proficiency.

    Conclusions: Results for the conflicting conditions are consistent with the hypothesis that speech representations in children with HL (a) are initially disproportionately structured in terms of visual speech and (b) become better specified with age in terms of auditorily encoded information. PMID:19339701

  15. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance when the stimuli were presented simultaneously in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when each stimulus was presented sequentially in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that performance can be compared across studies using different types of visual cue control is unwarranted, and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967

  16. Implied motion language can influence visual spatial memory.

    PubMed

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  17. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  18. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  19. Studies of Visual Attention in Physics Problem Solving

    ERIC Educational Resources Information Center

    Madsen, Adrian M.

    2013-01-01

    The work described here represents an effort to understand and influence visual attention while solving physics problems containing a diagram. Our visual system is guided by two types of processes--top-down and bottom-up. The top-down processes are internal and determined by one's prior knowledge and goals. The bottom-up processes are external and…

  20. The Role of Visual Processing Speed in Reading Speed Development

    PubMed Central

    Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane

    2013-01-01

    A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight- and nine-year-old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children. PMID:23593117

  21. The role of visual processing speed in reading speed development.

    PubMed

    Lobier, Muriel; Dubois, Matthieu; Valdois, Sylviane

    2013-01-01

    A steady increase in reading speed is the hallmark of normal reading acquisition. However, little is known of the influence of visual attention capacity on children's reading speed. The number of distinct visual elements that can be simultaneously processed at a glance (dubbed the visual attention span) predicts single-word reading speed in both normal reading and dyslexic children. However, the exact processes that account for the relationship between the visual attention span and reading speed remain to be specified. We used the Theory of Visual Attention to estimate visual processing speed and visual short-term memory capacity from a multiple letter report task in eight- and nine-year-old children. The visual attention span and text reading speed were also assessed. Results showed that visual processing speed and visual short-term memory capacity predicted the visual attention span. Furthermore, visual processing speed predicted reading speed, but visual short-term memory capacity did not. Finally, the visual attention span mediated the effect of visual processing speed on reading speed. These results suggest that visual attention capacity could constrain reading speed in elementary school children.

  22. Differential effects of non-informative vision and visual interference on haptic spatial processing

    PubMed Central

    van Rheede, Joram J.; Postma, Albert; Kappers, Astrid M. L.

    2008-01-01

    The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality. PMID:18553074

  23. Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas

    PubMed Central

    Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon

    2012-01-01

    Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116

  4. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS induced BOLD signal increases in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when the faces were attended. These TMS effects on BOLD signals in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for a role of the human FEF in the control of nonspatial, "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  5. Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex

    PubMed Central

    McMains, Stephanie; Kastner, Sabine

    2011-01-01

    Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167

  6. The consequence of spatial visual processing dysfunction caused by traumatic brain injury (TBI).

    PubMed

    Padula, William V; Capo-Aponte, Jose E; Padula, William V; Singman, Eric L; Jenness, Jonathan

    2017-01-01

    Research supports a bi-modal model of visual processing and its dysfunction following traumatic brain injury (TBI). TBI causes dysfunction of visual processing affecting binocularity, spatial orientation, posture and balance. Research demonstrates that prescription of prisms influences the plasticity between spatial visual processing and motor-sensory systems, improving visual processing and reducing symptoms following a TBI. The rationale is that visual processing underlies the functional aspects of binocularity, balance and posture, and that the bi-modal visual process maintains plasticity for efficiency. Its compromise causes Post Trauma Vision Syndrome (PTVS) and Visual Midline Shift Syndrome (VMSS). Rehabilitation through the use of lenses, prisms and sectoral occlusion has inter-professional implications, affecting the plasticity of the bi-modal visual process and thereby improving binocularity, spatial orientation, posture and balance. Main outcomes: This review provides an opportunity to create a new perspective on the consequences of TBI for visual processing and the symptoms that are often caused by trauma. It also serves to provide a perspective on visual processing dysfunction that has potential for developing new approaches to rehabilitation. Understanding vision as a bi-modal process facilitates a new perspective on visual processing and the potential for rehabilitation following a concussion, brain injury or other neurological events.

  7. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  8. How to (and how not to) think about top-down influences on visual perception.

    PubMed

    Teufel, Christoph; Nanay, Bence

    2017-01-01

    The question of whether cognition can influence perception has a long history in neuroscience and philosophy. Here, we outline a novel approach to this issue, arguing that it should be viewed within the framework of top-down information-processing. This approach leads to a reversal of the standard explanatory order of the cognitive penetration debate: we suggest studying top-down processing at various levels without preconceptions of perception or cognition. Once a clear picture has emerged about which processes have influences on those at lower levels, we can re-address the extent to which they should be considered perceptual or cognitive. Using top-down processing within the visual system as a model for higher-level influences, we argue that the current evidence indicates clear constraints on top-down influences at all stages of information processing; it does, however, not support the notion of a boundary between specific types of information-processing as proposed by the cognitive impenetrability hypothesis. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Visual Literacy. . .An Overview of Theory and Practice.

    ERIC Educational Resources Information Center

    DeSantis, Lucille Burbank; Pett, Dennis W.

    Visual Literacy is a field that encompasses a variety of theoretical constructs and practical considerations relating to communicating with visual signs. The theoretical constructs that influence visual communication primarily fall into two closely interrelated categories: those that relate to the individuals involved in the communication process,…

  10. Task- and age-dependent effects of visual stimulus properties on children's explicit numerosity judgments.

    PubMed

    Defever, Emmy; Reynvoet, Bert; Gebuis, Titia

    2013-10-01

    Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues remained to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech process of stutterers. Visual stimuli have been compared with acoustic and visual-acoustic stimuli. Following this, methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are presented, along with the concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are directed at the development of new speech correctors for use in stuttering therapy.

  12. The Interplay between Nonsymbolic Number and Its Continuous Visual Properties

    ERIC Educational Resources Information Center

    Gebuis, Titia; Reynvoet, Bert

    2012-01-01

    To date, researchers investigating nonsymbolic number processes devoted little attention to the visual properties of their stimuli. This is unexpected, as nonsymbolic number is defined by its visual characteristics. When number changes, its visual properties change accordingly. In this study, we investigated the influence of different visual…

  13. Gender-specific effects of emotional modulation on visual temporal order thresholds.

    PubMed

    Liang, Wei; Zhang, Jiyuan; Bao, Yan

    2015-09-01

    Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated an increased threshold under the positive emotional state and a reduced threshold under the negative emotional state. In addition, emotions influenced female subjects more intensely than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, possibly associated with differences in emotional stability.

  14. Predictive and postdictive mechanisms jointly contribute to visual awareness.

    PubMed

    Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki

    2009-09-01

    One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.

  15. Prolonged fasting impairs neural reactivity to visual stimulation.

    PubMed

    Kohn, N; Wassenberg, A; Toygar, T; Kellermann, T; Weidenfeld, C; Berthold-Losleben, M; Chechko, N; Orfanos, S; Vocke, S; Laoutidis, Z G; Schneider, F; Karges, W; Habel, U

    2016-01-01

    Previous literature has shown that hypoglycemia influences the intensity of the BOLD signal. A similar but smaller effect may also be elicited by low normal blood glucose levels in healthy individuals. This may not only confound the BOLD signal measured in fMRI, but also more generally interact with cognitive processing, and thus indirectly influence fMRI results. Here we show in a placebo-controlled, crossover, double-blind study on 40 healthy subjects, that overnight fasting and low normal levels of glucose contrasted to an activated, elevated glucose condition have an impact on brain activation during basal visual stimulation. Additionally, functional connectivity of the visual cortex shows a strengthened association with higher-order attention-related brain areas in an elevated blood glucose condition compared to the fasting condition. In a fasting state visual brain areas show stronger coupling to the inferior temporal gyrus. Results demonstrate that prolonged overnight fasting leads to a diminished BOLD signal in higher-order occipital processing areas when compared to an elevated blood glucose condition. Additionally, functional connectivity patterns underscore the modulatory influence of fasting on visual brain networks. Patterns of brain activation and functional connectivity associated with a broad range of attentional processes are affected by maturation and aging and associated with psychiatric disease and intoxication. Thus, we conclude that prolonged fasting may decrease fMRI design sensitivity in any task involving attentional processes when fasting status or blood glucose is not controlled.

  16. Distant influences of amygdala lesion on visual cortical activation during emotional face processing.

    PubMed

    Vuilleumier, Patrik; Richardson, Mark P; Armony, Jorge L; Driver, Jon; Dolan, Raymond J

    2004-11-01

    Emotional visual stimuli evoke enhanced responses in the visual cortex. To test whether this reflects modulatory influences from the amygdala on sensory processing, we used event-related functional magnetic resonance imaging (fMRI) in human patients with medial temporal lobe sclerosis. Twenty-six patients with lesions in the amygdala, the hippocampus or both, plus 13 matched healthy controls, were shown pictures of fearful or neutral faces in task-relevant or task-irrelevant positions on the display. All subjects showed increased fusiform cortex activation when the faces were in task-relevant positions. Both healthy individuals and those with hippocampal damage showed increased activation in the fusiform and occipital cortex when they were shown fearful faces, but this was not the case for individuals with damage to the amygdala, even though visual areas were structurally intact. The distant influence of the amygdala was also evidenced by the parametric relationship between amygdala damage and the level of emotional activation in the fusiform cortex. Our data show that combining the fMRI and lesion approaches can help reveal the source of functional modulatory influences between distant but interconnected brain regions.

  17. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task.

    PubMed

    Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.

  18. Interhemispheric Resource Sharing: Decreasing Benefits with Increasing Processing Efficiency

    ERIC Educational Resources Information Center

    Maertens, M.; Pollmann, S.

    2005-01-01

    Visual matches are sometimes faster when stimuli are presented across visual hemifields, compared to within-field matching. Using a cued geometric figure matching task, we investigated the influence of computational complexity vs. processing efficiency on this bilateral distribution advantage (BDA). Computational complexity was manipulated by…

  19. Object shape and orientation do not routinely influence performance during language processing.

    PubMed

    Rommers, Joost; Meyer, Antje S; Huettig, Falk

    2013-11-01

    The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.

  20. Music reading expertise modulates hemispheric lateralization in English word processing but not in Chinese character processing.

    PubMed

    Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen

    2018-07-01

    Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Local and Global Visual Processing in Autism Spectrum Disorders: Influence of Task and Sample Characteristics and Relation to Symptom Severity

    ERIC Educational Resources Information Center

    Van Eylen, Lien; Boets, Bart; Steyaert, Jean; Wagemans, Johan; Noens, Ilse

    2018-01-01

    Local and global visual processing abilities and processing style were investigated in individuals with autism spectrum disorder (ASD) versus typically developing individuals, children versus adolescents and boys versus girls. Individuals with ASD displayed more attention to detail in daily life, while laboratory tasks showed slightly reduced…

  2. Virtually simulated social pressure influences early visual processing more in low compared to high autonomous participants.

    PubMed

    Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried

    2014-02-01

    In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100 ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA, participants. Copyright © 2013 Society for Psychophysiological Research.

  3. Odours reduce the magnitude of object substitution masking for matching visual targets in females.

    PubMed

    Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B

    2016-08-01

    Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females than males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst nonodour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
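
    Target detection in the study above was analysed with the signal detection sensitivity measure d', defined as z(hit rate) − z(false-alarm rate). A minimal sketch of that computation follows; the trial counts are hypothetical, and the log-linear correction for extreme rates is a common convention assumed here, not necessarily the authors' choice.

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' = z(hit rate) - z(false-alarm rate).

        The log-linear correction (add 0.5 to each cell, 1 to each total)
        keeps rates strictly inside (0, 1) so the inverse normal CDF is defined
        even when a rate would otherwise be exactly 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf  # inverse standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Hypothetical counts: 40 target-present and 40 target-absent trials
    print(d_prime(hits=32, misses=8, false_alarms=8, correct_rejections=32))
    ```

    Chance performance (equal hit and false-alarm rates) yields d' = 0, and higher d' indicates better discrimination of targets from noise independent of response bias.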

  4. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  5. Environmental influences on neural systems of relational complexity

    PubMed Central

    Kalbfleisch, M. Layne; deBettencourt, Megan T.; Kopperman, Rebecca; Banasiak, Meredith; Roberts, Joshua M.; Halavi, Maryam

    2013-01-01

    Constructivist learning theory contends that we construct knowledge by experience and that environmental context influences learning. To explore this principle, we examined the cognitive process relational complexity (RC), defined as the number of visual dimensions considered during problem solving on a matrix reasoning task and a well-documented measure of mature reasoning capacity. We sought to determine how the visual environment influences RC by examining the influence of color and visual contrast on RC in a neuroimaging task. To specify the contributions of sensory demand and relational integration to reasoning, our participants performed a non-verbal matrix task comprised of color, no-color line, or black-white visual contrast conditions parametrically varied by complexity (relations 0, 1, 2). The use of matrix reasoning is ecologically valid for its psychometric relevance and for its potential to link the processing of psychophysically specific visual properties with various levels of RC during reasoning. The role of these elements is important because matrix tests assess intellectual aptitude based on these seemingly context-less exercises. This experiment is a first step toward examining the psychophysical underpinnings of performance on these types of problems. The importance of this is increased in light of recent evidence that intelligence can be linked to visual discrimination. We submit three main findings. First, color and black-white visual contrast (BWVC) add demand at a basic sensory level, but contributions from color and from BWVC are dissociable in cortex such that color engages a “reasoning heuristic” and BWVC engages a “sensory heuristic.” Second, color supports contextual sense-making by boosting salience resulting in faster problem solving. Lastly, when visual complexity reaches 2-relations, color and visual contrast relinquish salience to other dimensions of problem solving. PMID:24133465

  6. Manipulating Color and Other Visual Information Influences Picture Naming at Different Levels of Processing: Evidence from Alzheimer Subjects and Normal Controls

    ERIC Educational Resources Information Center

    Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.

    2010-01-01

    There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…

  7. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  8. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  9. Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.

    PubMed

    Alexander, Gerianne M; Charles, Nora

    2009-06-01

    An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye-tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli, and visual attention to male and female faces was associated with visual attention to gender-conforming or -nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.

  10. Perceptual load influences selective attention across development.

    PubMed

    Couperus, Jane W

    2011-09-01

    Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined whether changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event-related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus as perceptual load increased at the P1 visual component. However, although there were no qualitative differences in changes in processing, there were quantitative differences, with shorter P1 latencies in teens and adults compared with children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load to achieve the same difference in performance between low and high perceptual load as adults. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with adults.

  11. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.
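
    The SSVEP logic in this record is quantitative: squares flickering at a fixed rate drive early visual cortex at that same frequency, so the amplitude of the spectral peak at the flicker frequency indexes the processing resources allocated to the task stimuli. A minimal sketch of that amplitude estimate via FFT on synthetic data (the sampling rate, the 12 Hz flicker, and the function name are illustrative assumptions, not details taken from the study):

```python
import numpy as np

def ssvep_amplitude(eeg, fs, flicker_hz):
    """Single-sided spectral amplitude at the flicker frequency.

    eeg: 1-D array of samples from an occipital electrode.
    fs: sampling rate in Hz.
    flicker_hz: stimulation frequency of the flickering squares.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / n     # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - flicker_hz))]  # nearest bin

# Synthetic check: a 12 Hz "SSVEP" of amplitude 2 buried in noise.
fs, secs, f = 500, 4, 12.0
t = np.arange(fs * secs) / fs
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
amp = ssvep_amplitude(eeg, fs, f)   # recovers roughly the true amplitude of 2
```

    In an analysis like the one described, this amplitude would be computed in a sliding window to track how the emotional distractors withdraw resources from the flickering foreground task over time.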

  12. Role of Visual Speech in Phonological Processing by Children with Hearing Loss

    ERIC Educational Resources Information Center

    Jerger, Susan; Tye-Murray, Nancy; Abdi, Herve

    2009-01-01

    Purpose: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in…

  13. Visual processing speed in old age.

    PubMed

    Habekost, Thomas; Vogel, Asmus; Rostrup, Egill; Bundesen, Claus; Kyllingsbaek, Søren; Garde, Ellen; Ryberg, Charlotte; Waldemar, Gunhild

    2013-04-01

    Mental speed is a common concept in theories of cognitive aging, but it is difficult to get measures of the speed of a particular psychological process that are not confounded by the speed of other processes. We used Bundesen's (1990) Theory of Visual Attention (TVA) to obtain specific estimates of processing speed in the visual system controlled for the influence of response latency and individual variations of the perception threshold. A total of 33 non-demented old people (69-87 years) were tested for the ability to recognize briefly presented letters. Performance was analyzed by the TVA model. Visual processing speed decreased approximately linearly with age and was on average halved from 70 to 85 years. Less dramatic aging effects were found for the perception threshold and the visual apprehension span. In the visual domain, cognitive aging seems to be most clearly related to reductions in processing speed. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
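
    In TVA terms, the probability of reporting a briefly exposed letter grows exponentially with exposure duration once the perception threshold t0 is passed, with the growth rate v serving as the processing-speed estimate; this is how speed can be separated from threshold and response latency. A toy sketch of that separation, recovered here by least-squares grid search on noiseless synthetic data (the parameter values and the fitting method are illustrative assumptions; actual TVA fitting uses maximum likelihood on whole- and partial-report data):

```python
import numpy as np

def report_prob(t, v, t0):
    """TVA-style report probability for a single letter: exponential
    accumulation at rate v (items/s) starting at threshold t0 (s)."""
    t = np.asarray(t, dtype=float)
    return np.where(t > t0, 1.0 - np.exp(-v * (t - t0)), 0.0)

def fit_tva(durations, accuracy, v_grid, t0_grid):
    """Least-squares grid search over processing speed v and threshold t0."""
    best, best_err = None, np.inf
    for v in v_grid:
        for t0 in t0_grid:
            err = np.sum((report_prob(durations, v, t0) - accuracy) ** 2)
            if err < best_err:
                best, best_err = (v, t0), err
    return best

# Synthetic observer: v = 40 items/s, perception threshold t0 = 20 ms.
durations = np.array([0.02, 0.03, 0.05, 0.08, 0.14, 0.20])
accuracy = report_prob(durations, 40.0, 0.02)
v_hat, t0_hat = fit_tva(durations, accuracy,
                        v_grid=np.arange(5.0, 80.0, 1.0),
                        t0_grid=np.arange(0.0, 0.05, 0.002))
```

    The study's headline result can be read directly off such fits: v (processing speed) declined steeply with age while t0 (threshold) changed comparatively little.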

  14. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  15. Top-down beta oscillatory signaling conveys behavioral context in early visual cortex.

    PubMed

    Richter, Craig G; Coppola, Richard; Bressler, Steven L

    2018-05-03

    Top-down modulation of sensory processing is a critical neural mechanism subserving numerous important cognitive roles, one of which may be to inform lower-order sensory systems of the current 'task at hand' by conveying behavioral context to these systems. Accumulating evidence indicates that top-down cortical influences are carried by directed interareal synchronization of oscillatory neuronal populations, with recent results pointing to beta-frequency oscillations as particularly important for top-down processing. However, it remains to be determined if top-down beta-frequency oscillations indeed convey behavioral context. We measured spectral Granger Causality (sGC) using local field potentials recorded from microelectrodes chronically implanted in visual areas V1/V2, V4, and TEO of two rhesus macaque monkeys, and applied multivariate pattern analysis to the spatial patterns of top-down sGC. We decoded behavioral context by discriminating patterns of top-down (V4/TEO-to-V1/V2) beta-peak sGC for two different task rules governing correct responses to identical visual stimuli. The results indicate that top-down directed influences are carried to visual cortex by beta oscillations, and differentiate task demands even before visual stimulus processing. They suggest that top-down beta-frequency oscillatory processes coordinate processing of sensory information by conveying global knowledge states to early levels of the sensory cortical hierarchy independently of bottom-up stimulus-driven processing.
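
    The decoding step in this record reduces to classifying single trials by their spatial pattern of top-down beta-peak sGC. A toy sketch of that logic using leave-one-out nearest-centroid classification on synthetic patterns (the number of site pairs, the effect size, and the classifier itself are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trials: each row is a spatial pattern of top-down
# beta-peak sGC over 8 V4/TEO-to-V1/V2 site pairs.
n_trials, n_pairs = 40, 8
rule_a = rng.normal(0.30, 0.05, (n_trials, n_pairs))
rule_b = rng.normal(0.30, 0.05, (n_trials, n_pairs))
rule_b[:, :4] += 0.08          # rule B strengthens half of the pairs

X = np.vstack([rule_a, rule_b])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    idx = np.arange(len(y))
    for i in idx:
        mask = idx != i                       # hold out trial i
        c0 = X[mask & (y == 0)].mean(axis=0)  # class centroids without it
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, y)   # well above the 0.5 chance level
```

    Above-chance accuracy in such a scheme is what licenses the claim that the spatial pattern of top-down beta influences differentiates the two task rules.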

  16. Varieties of cognitive penetration in visual perception.

    PubMed

    Vetter, Petra; Newen, Albert

    2014-07-01

    Is our perceptual experience a veridical representation of the world or is it a product of our beliefs and past experiences? Cognitive penetration describes the influence of higher level cognitive factors on perceptual experience and has been a debated topic in philosophy of mind and cognitive science. Here, we focus on visual perception, particularly early vision, and how it is affected by contextual expectations and memorized cognitive contents. We argue for cognitive penetration based on recent empirical evidence demonstrating contextual and top-down influences on early visual processes. On the basis of a perceptual model, we propose different types of cognitive penetration depending on the processing level on which the penetration happens and depending on where the penetrating influence comes from. Our proposal has two consequences: (1) the traditional controversy on whether cognitive penetration occurs or not is ill posed, and (2) a clear-cut perception-cognition boundary cannot be maintained. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Remembering the Specific Visual Details of Presented Objects: Neuroimaging Evidence for Effects of Emotion

    ERIC Educational Resources Information Center

    Kensinger, Elizabeth A.; Schacter, Daniel L.

    2007-01-01

    Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…

  18. Haptic guidance of overt visual attention.

    PubMed

    List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2014-11-01

    Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.

  19. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task

    PubMed Central

    Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291

  20. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    PubMed

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
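
    The perceptual stage of this model can be caricatured as a race of independent drift-diffusion accumulators, one per item, whose drift mixes saliency and value in the reported one-third to two-thirds ratio; the first accumulator to reach threshold determines where the eyes land. A minimal simulation under that reading (all numeric parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def race_first_crossing(drifts, threshold=1.0, noise=0.4, dt=0.01, max_t=10.0):
    """One trial of independent DDM races (one accumulator per item);
    returns the index of the first accumulator to reach threshold."""
    x = np.zeros(len(drifts))
    for _ in range(int(max_t / dt)):
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        if (x >= threshold).any():
            return int(np.argmax(x))
    return int(np.argmax(x))   # no crossing within max_t: pick the leader

# Hypothetical items: item 0 is salient but low-value, item 1 the reverse.
saliency = np.array([0.9, 0.3])
value = np.array([0.2, 0.8])
drift = saliency / 3.0 + 2.0 * value / 3.0   # 1:2 saliency-to-value weighting

wins = np.array([race_first_crossing(drift) for _ in range(500)])
p_item1 = (wins == 1).mean()   # high-value item draws most first "fixations"
```

    With this weighting the high-value item wins the majority of races even though the other item is more salient, mirroring the paper's conclusion that value carries more weight than saliency while saliency still biases the perceptual stage.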

  2. Top-down modulation of ventral occipito-temporal responses during visual word recognition.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T

    2011-04-01

    Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perceptions of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perceptions. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  4. Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry

    PubMed Central

    Jaworska, Katarzyna; Lages, Martin

    2014-01-01

    Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063

  5. The Impact of Visualizations in Promoting Informed Natural Resource Decisions

    ERIC Educational Resources Information Center

    Turner, Sheldon

    2013-01-01

    The research in this dissertation was conducted in order to understand the ways in which scientific visualizations can influence the decision process of non-scientists. A wide variety of classical and novel methods were used in order to capture and analyze the decision process. Data were collected from non-scientists through role-play interviews…

  6. Visual Grouping in Accordance With Utterance Planning Facilitates Speech Production.

    PubMed

    Zhao, Liming; Paterson, Kevin B; Bai, Xuejun

    2018-01-01

    Research on language production has focused on the process of utterance planning and involved studying the synchronization between visual gaze and the production of sentences that refer to objects in the immediate visual environment. However, it remains unclear how the visual grouping of these objects might influence this process. To shed light on this issue, the present research examined the effects of the visual grouping of objects in a visual display on utterance planning in two experiments. Participants produced utterances of the form "The snail and the necklace are above/below/on the left/right side of the toothbrush" for objects containing these referents (e.g., a snail, a necklace and a toothbrush). These objects were grouped using classic Gestalt principles of color similarity (Experiment 1) and common region (Experiment 2) so that the induced perceptual grouping was congruent or incongruent with the required phrasal organization. The results showed that speech onset latencies were shorter in congruent than incongruent conditions. The findings therefore reveal that the congruency between the visual grouping of referents and the required phrasal organization can influence speech production. Such findings suggest that, when language is produced in a visual context, speakers make use of both visual and linguistic cues to plan utterances.

  7. Independence between implicit and explicit processing as revealed by the Simon effect.

    PubMed

    Lo, Shih-Yu; Yeh, Su-Ling

    2011-09-01

    Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized response. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed an auditory Simon effect, but there was still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Human visual perceptual organization beats thinking on speed.

    PubMed

    van der Helm, Peter A

    2017-05-01

    What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or another. It has, however, also been muddled by the usage of different and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that including speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input of the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.

  9. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  11. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  12. Beyond Phonology: Visual Processes Predict Alphanumeric and Nonalphanumeric Rapid Naming in Poor Early Readers

    ERIC Educational Resources Information Center

    Kruk, Richard S.; Luther Ruban, Cassia

    2018-01-01

    Visual processes in Grade 1 were examined for their predictive influences in nonalphanumeric and alphanumeric rapid naming (RAN) in 51 poor early and 69 typical readers. In a lagged design, children were followed longitudinally from Grade 1 to Grade 3 over 5 testing occasions. RAN outcomes in early Grade 2 were predicted by speeded and nonspeeded…

  13. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  14. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  15. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  16. Internal model of gravity influences configural body processing.

    PubMed

    Barra, Julien; Senot, Patrice; Auclair, Laurent

    2017-01-01

    Human bodies are processed by a configural processing mechanism. Evidence supporting this claim is the body inversion effect, in which inversion impairs recognition of bodies more than other objects. Biomechanical configuration, as well as both visual and embodied expertise, has been demonstrated to play an important role in this effect. Nevertheless, another important factor in the body inversion effect may be gravity orientation, since gravity is one of the most fundamental constraints on our biology, behavior, and perception on Earth. The visual presentation of an inverted body in a typical body inversion paradigm turns the observed body upside down but also inverts the implicit direction of visual gravity in the scene. The orientation of visual gravity is then in conflict with the direction of actual gravity and may influence configural processing. To test this hypothesis, we dissociated the orientations of the body and of visual gravity by manipulating body posture. In a pretest we showed that it was possible to turn an avatar upside down (inversion relative to retinal coordinates) without inverting the orientation of visual gravity when the avatar stands on his/her hands. We compared the inversion effect in typical conditions (with gravity conflict when the avatar is upside down) to the inversion effect in conditions with no conflict between visual and physical gravity. The results of our experiment revealed that the inversion effect, as measured by both error rate and reaction time, was strongly reduced when there was no gravity conflict. Our results suggest that when an observed body is upside down (inversion relative to participants' retinal coordinates) but the orientation of visual gravity is not, configural processing of bodies might still be possible. In this paper, we discuss the implications of an internal model of gravity in the configural processing of observed bodies. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Cross-modal perceptual load: the impact of modality and individual differences.

    PubMed

    Sandhu, Rajwant; Dyson, Benjamin James

    2016-05-01

    Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend auditory and attend visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.

  18. Interactions between motion and form processing in the human visual system.

    PubMed

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  20. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    PubMed

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. The influence of visual contrast and case changes on parafoveal preview benefits during reading.

    PubMed

    Wang, Chin-An; Inhoff, Albrecht W

    2010-04-01

    Reingold and Rayner (2006) showed that the visual contrast of a fixated target word influenced its viewing duration, but not the viewing of the next (posttarget) word in the text that was shown in regular contrast. Configurational target changes, by contrast, influenced target and posttarget viewing. The current study examined whether this effect pattern can be attributed to differential processing of the posttarget word during target viewing. A boundary paradigm (Rayner, 1975) was used to provide an informative or uninformative posttarget preview and to reveal the word when it was fixated. Consistent with the earlier study, more time was spent viewing the target when its visual contrast was low and its configuration unfamiliar. Critically, target contrast had no effect on the acquisition of useful information from a posttarget preview, but an unfamiliar target configuration diminished the usefulness of an informative posttarget preview. These findings are consistent with Reingold and Rayner's (2006) claim that saccade programming and attention shifting during reading can be controlled by functionally distinct word recognition processes.

  2. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  4. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.

  5. Visual search performance during simulated radar observation with and without sweepline.

    DOT National Transportation Integrated Search

    1979-01-01

    A study was conducted to determine whether or not the presence or absence of a radar sweepline influences attentional processes and, hence, the speed with which critical stimuli can be detected. The visual display was designed to approximate an advan...

  6. Influence of genetic background on anthocyanin and copigment composition and behavior during thermoalkaline processing of maize

    USDA-ARS?s Scientific Manuscript database

    Visual color is a primary factor for foods purchase; identifying factors that influence in-situ color quality of pigmented maize during processing is important. We used 24 genetically distinct pigmented maize hybrids (red/blue, blue, red, and purple) to investigate the effect of pigment and copigme...

  7. Determining the Motor Skills Development of Mentally Retarded Children through the Contribution of Visual Arts

    ERIC Educational Resources Information Center

    Erim, Gonca; Caferoglu, Müge

    2017-01-01

    Visual arts education is a process that helps the reflection of inner worlds, socialization via group works and healthier motor skills development of normally developing or handicapped children like the mentally retarded. This study aims to determine the influence of visual art studies on the motor skills development of primary school first grade…

  8. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.

  9. The influence of steroid sex hormones on the cognitive and emotional processing of visual stimuli in humans.

    PubMed

    Little, Anthony C

    2013-10-01

    Steroid sex hormones are responsible for some of the differences between men and women. In this article, I review evidence that steroid sex hormones impact on visual processing. Given prominent sex-differences, I focus on three topics for sex hormone effects for which there is most research available: 1. Preference and mate choice, 2. Emotion and recognition, and 3. Cerebral/perceptual asymmetries and visual-spatial abilities. For each topic, researchers have examined sex hormones and visual processing using various methods. I review indirect evidence addressing variation according to: menstrual cycle phase, pregnancy, puberty, and menopause. I further address studies of variation in testosterone and a measure of prenatal testosterone, 2D:4D, on visual processing. The most conclusive evidence, however, comes from experiments. Studies in which hormones are administrated are discussed. Overall, many studies demonstrate that sex steroids are associated with visual processing. However, findings are sometimes inconsistent, differences in methodology make strong comparisons between studies difficult, and we generally know more about activational than organizational effects. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Concurrent visual and tactile steady-state evoked potentials index allocation of inter-modal attention: a frequency-tagging study.

    PubMed

    Porcu, Emanuele; Keitel, Christian; Müller, Matthias M

    2013-11-27

    We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5 Hz) and a tactile stimulus (20 Hz) and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony, only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus, extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
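
    The frequency-tagging logic above, in which each modality's stimulus flickers at its own rate so that its cortical response can be read out at that frequency, can be sketched with a single-bin DFT. The sampling rate, tag amplitudes, and synthetic "EEG" below are hypothetical illustrations, not values from the study:

```python
import cmath
import math

def tagged_amplitude(signal, fs, f):
    """Amplitude of the frequency-tagged component at f Hz via a single-bin DFT."""
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * math.pi * f * k / fs)
              for k, x in enumerate(signal))
    return 2 * abs(acc) / n

fs = 600.0                          # hypothetical sampling rate, Hz
t = [k / fs for k in range(1200)]   # 2 s of data (integer cycles of both tags)
# Synthetic signal: visual tag at 7.5 Hz (amplitude 1.0), tactile tag at 20 Hz (0.3)
x = [1.0 * math.sin(2 * math.pi * 7.5 * ti) +
     0.3 * math.sin(2 * math.pi * 20.0 * ti) for ti in t]

amp_visual = tagged_amplitude(x, fs, 7.5)
amp_tactile = tagged_amplitude(x, fs, 20.0)
```

    Because the two tags fall in different frequency bins, each amplitude is recovered independently of the other; with real recordings one would average epochs before applying the same readout, and probe 15 Hz and 40 Hz for the second harmonics.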

  11. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    PubMed

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
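
    The distinction the authors draw between threshold setting and processing efficiency maps onto the boundary-separation and drift-rate parameters of a drift diffusion model. A minimal simulation (hypothetical parameter values, not the study's fitted estimates) shows how raising only the threshold slows responses while drift, i.e. efficiency, stays fixed:

```python
import math
import random

def ddm_trial(drift, threshold, dt=0.001, noise=1.0):
    """One drift-diffusion trial: accumulate noisy evidence from 0 until it
    crosses +threshold or -threshold; return the decision time in seconds."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return t

random.seed(1)
drift = 2.0  # processing efficiency, identical in both simulated conditions
rts_low = [ddm_trial(drift, threshold=0.8) for _ in range(500)]   # e.g. congruent cue
rts_high = [ddm_trial(drift, threshold=1.6) for _ in range(500)]  # e.g. incongruent cue

mean_low = sum(rts_low) / len(rts_low)
mean_high = sum(rts_high) / len(rts_high)
# Same drift, higher threshold -> reliably longer mean decision times
```

    Fitting the model to data works in the opposite direction: the observed RT distributions constrain drift and threshold jointly, which is how the study can attribute the congruency effect to threshold setting rather than efficiency.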

  12. Abnormal late visual responses and alpha oscillations in neurofibromatosis type 1: a link to visual and attention deficits

    PubMed Central

    2014-01-01

    Background Neurofibromatosis type 1 (NF1) affects several areas of cognitive function including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results We found significant differences between the groups for late chromatic VEPs and a specific enhancement in the amplitude of the parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations in visual performance were found in both groups. Conclusions Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228

  13. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes, and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  14. Retinal ganglion cell maps in the brain: implications for visual processing.

    PubMed

    Dhande, Onkar S; Huberman, Andrew D

    2014-02-01

    Everything the brain knows about the content of the visual world is built from the spiking activity of retinal ganglion cells (RGCs). As the output neurons of the eye, RGCs include ∼20 different subtypes, each responding best to a specific feature in the visual scene. Here we discuss recent advances in identifying where different RGC subtypes route visual information in the brain, including which targets they connect to and how their organization within those targets influences visual processing. We also highlight examples where causal links have been established between specific RGC subtypes, their maps of central connections and defined aspects of light-mediated behavior and we suggest the use of techniques that stand to extend these sorts of analyses to circuits underlying visual perception. Copyright © 2013. Published by Elsevier Ltd.

  15. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  16. Top-down dimensional weight set determines the capture of visual attention: evidence from the PCN component.

    PubMed

    Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael

    2012-07-01

    Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.

  17. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words were used as stimuli: Katakana-familiar words, which are more frequently written in Katakana script, and Hiragana-familiar words, which are written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of each word: in the visually familiar condition, words were presented in their more familiar script, and in the visually unfamiliar condition, in their less familiar script. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, i.e., superior performance for bilateral relative to unilateral presentation, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  18. Time course of influence on the allocation of attentional resources caused by unconscious fearful faces.

    PubMed

    Jiang, Yunpeng; Wu, Xia; Saab, Rami; Xiao, Yi; Gao, Xiaorong

    2018-05-01

    Emotionally affective stimuli have priority in our visual processing even in the absence of conscious processing. However, the influence of unconscious emotional stimuli on our attentional resources remains unclear. Using the continuous flash suppression (CFS) paradigm, we concurrently recorded and analyzed visual event-related potential (ERP) components evoked by the images of suppressed fearful and neutral faces, and the steady-state visual evoked potential (SSVEP) elicited by dynamic Mondrian pictures. Fearful faces, relative to neutral faces, elicited larger late ERP components on parietal electrodes, indicating emotional expression processing without consciousness. More importantly, the presentation of a suppressed fearful face in the CFS resulted in a significantly greater decrease in SSVEP amplitude which started about 1-1.2 s after the face images first appeared. This suggests that the time course of the attentional bias occurs at about 1 s after the appearance of the fearful face and demonstrates that unconscious fearful faces may influence attentional resource allocation. Moreover, we proposed a new method that could eliminate the interaction of ERPs and SSVEPs when recorded concurrently. Copyright © 2018 Elsevier Ltd. All rights reserved.
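    The SSVEP measure described in this record is typically obtained by reading out the amplitude of the EEG at the flicker frequency of the masking stimulation. The Python sketch below shows one standard way to do this with a discrete Fourier transform; the sampling rate, 10 Hz flicker frequency, and synthetic signal are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, stim_freq):
    """Amplitude of the steady-state response at the stimulation frequency.

    eeg       : 1-D array, a single-channel EEG segment
    fs        : sampling rate in Hz
    stim_freq : flicker frequency of the visual stimulation in Hz
    """
    n = len(eeg)
    # Scale the one-sided spectrum so a pure sinusoid of amplitude A reads A.
    amps = np.abs(np.fft.rfft(eeg - eeg.mean())) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Read out the spectral bin closest to the stimulation frequency.
    return amps[np.argmin(np.abs(freqs - stim_freq))]

# Synthetic check: a 10 Hz oscillation of amplitude 2 embedded in noise.
fs, stim_freq = 500, 10.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * stim_freq * t) + 0.5 * rng.standard_normal(t.size)
amp = ssvep_amplitude(eeg, fs, stim_freq)
```

    A suppression effect like the one reported above would appear as a drop in this per-window amplitude after the fearful face onset, relative to a pre-onset baseline.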

  19. The Influence of verbalization on the pattern of cortical activation during mental arithmetic

    PubMed Central

    2012-01-01

    Background The aim of the present functional magnetic resonance imaging (fMRI) study at 3 T was to investigate the influence of the verbal-visual cognitive style on cerebral activation patterns during mental arithmetic. In the domain of arithmetic, a visual style might, for example, mean visualizing numbers and (intermediate) results, whereas a verbal style might mean that numbers and (intermediate) results are verbally rehearsed. We investigated, first, whether verbalizers show activations in areas for language processing, and whether visualizers show activations in areas for visual processing, during mental arithmetic. Some researchers have proposed that the left and right intraparietal sulcus (IPS) and the left angular gyrus (AG), two areas involved in number processing, show some domain or modality specificity: verbal for the left AG, and visual for the left and right IPS. We investigated, second, whether activation in these areas implicated in number processing depended on an individual's cognitive style. Methods 42 young healthy adults participated in the fMRI study, which comprised two functional sessions. In the first session, subtraction and multiplication problems were presented in an event-related design; in the second, multiplications were presented in two formats, as Arabic numerals and as written number words, also in an event-related design. Each individual's habitual use of visualization and verbalization during mental arithmetic was assessed with a short self-report instrument. Results In both functional sessions, the use of verbalization predicted activation in brain areas associated with language (supramarginal gyrus) and auditory processing (Heschl's gyrus, Rolandic operculum). However, we found no modulation of activation in the left AG as a function of verbalization. Conclusions Our results confirm that strong verbalizers use mental speech as a form of mental imagery more strongly than weak verbalizers. Moreover, our results suggest that the left AG has no specific affinity for the verbal domain and subserves number processing in a modality-general way. PMID:22404872

  20. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  1. Death anxiety and visual oculomotor processing of arousing stimuli in a free view setting.

    PubMed

    Wendelberg, Linda; Volden, Frode; Yildirim-Yayilgan, Sule

    2017-04-01

    The main goal of this study was to determine how death anxiety (DA) affects visual processing when confronted with arousing stimuli. A total of 26 males and females were primed with either DA or a neutral primer and were given a free view/free choice task where eye movement was measured using an eye tracker. The goal was to identify measurable/observable indicators of whether the subjects were under the influence of DA during the free view. We conducted an eye tracking study because this is an area where we believe it is possible to find observable indicators. Ultimately, we observed some changes in the visual behavior, such as a prolonged average latency, altered sensitivity to the repetition of stimuli, longer fixations, less time in saccadic activity, and fewer classifications related to focal and ambient processing, which appear to occur under the influence of DA when the subjects are confronted with arousing stimuli. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
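    Eye-tracking measures such as fixation duration and time in saccadic activity depend on first classifying raw gaze samples. A minimal velocity-threshold (I-VT) classifier, one standard approach, can be sketched as follows; the 30°/s threshold and the sampling rate in the example are conventional illustrative values, not parameters reported by the authors.

```python
import numpy as np

def classify_samples(x, y, fs, velocity_threshold=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.

    x, y : gaze position in degrees of visual angle
    fs   : eye-tracker sampling rate in Hz
    Samples moving faster than velocity_threshold (deg/s) are labelled
    saccadic; the rest are fixation samples. Returns a boolean array,
    True where the sample is saccadic.
    """
    vx = np.gradient(x) * fs          # velocity via central differences
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    return speed > velocity_threshold

# Synthetic trace: 50 samples fixating (0, 0), a 10-sample saccade to
# x = 10 deg, then 50 samples fixating the new location, at 500 Hz.
x = np.concatenate([np.zeros(50), np.linspace(0, 10, 10), np.full(50, 10.0)])
y = np.zeros_like(x)
is_saccade = classify_samples(x, y, fs=500)
```

    Fixation durations and saccadic-time proportions of the kind analyzed above then follow from run lengths of the resulting labels.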

  2. Global processing takes time: A meta-analysis on local-global visual processing in ASD.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Brewaeys, Katrien; Van den Noortgate, Wim; Wagemans, Johan

    2015-05-01

    What does an individual with autism spectrum disorder (ASD) perceive first: the forest or the trees? In spite of 30 years of research and influential theories like the weak central coherence (WCC) theory and the enhanced perceptual functioning (EPF) account, the interplay of local and global visual processing in ASD remains only partly understood. Research findings vary in indicating a local processing bias or a global processing deficit, and often contradict each other. We applied a formal meta-analytic approach and combined 56 articles that tested about 1,000 ASD participants and used a wide range of stimuli and tasks to investigate local and global visual processing in ASD. Overall, results show neither enhanced local visual processing nor a deficit in global visual processing. Detailed analysis reveals a difference in the temporal pattern of the local-global balance, that is, slow global processing in individuals with ASD. Whereas task-dependent interaction effects are obtained, gender, age, and IQ of either participant group seem to have no direct influence on performance. Based on the overview of the literature, suggestions are made for future research. (c) 2015 APA, all rights reserved.
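    A formal meta-analysis of this kind pools per-study effect sizes while allowing for between-study heterogeneity. The sketch below implements the widely used DerSimonian-Laird random-effects estimator on hypothetical effect sizes; the numbers are invented for illustration and are not the data analyzed in the article.

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    # Cochran's Q measures observed heterogeneity around the fixed estimate.
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Hypothetical effect sizes (Hedges g) and sampling variances for five studies.
g = [0.30, -0.10, 0.45, 0.05, 0.20]
v = [0.04, 0.06, 0.05, 0.03, 0.07]
pooled, se = random_effects(g, v)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

    A pooled confidence interval spanning zero, as in this toy example, would correspond to the article's conclusion of no overall local advantage or global deficit.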

  3. Understanding the visual resource

    Treesearch

    Floyd L. Newby

    1971-01-01

    Understanding our visual resources involves a complex interweaving of motivation and cognitive recesses; but, more important, it requires that we understand and can identify those characteristics of a landscape that influence the image formation process. From research conducted in Florida, three major variables were identified that appear to have significant effect...

  4. Identity-expression interaction in face perception: sex, visual field, and psychophysical factors.

    PubMed

    Godard, Ornella; Baudouin, Jean-Yves; Bonnet, Philippe; Fiori, Nicole

    2013-01-01

    We investigated the psychophysical factors underlying the identity-emotion interaction in face perception. Visual field and sex were also taken into account. Participants had to judge whether a probe face, presented in either the left or the right visual field, and a central target face belonged to the same person while emotional expression varied (Experiment 1), or to judge whether probe and target faces expressed the same emotion while identity was manipulated (Experiment 2). For accuracy we replicated the mutual facilitation effect between identity and emotion; no sex or hemispheric differences were found. Processing speed measurements, however, showed a lesser degree of interference in women than in men, especially for matching identity when faces expressed different emotions after a left-visual-field probe-face presentation. Psychophysical indices can be used to determine whether these effects are perceptual (A') or instead arise at a post-perceptual decision-making stage (B"). The influence of identity on the processing of facial emotion seems to be due to perceptual factors, whereas the influence of emotion changes on identity processing seems to be related to decisional factors. In addition, men seem to be more "conservative" after a LVF/RH probe-face presentation when processing identity. Women seem to benefit from better abilities to extract invariant facial aspects related to identity.

  5. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  6. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  7. Retinotopically specific reorganization of visual cortex for tactile pattern recognition

    PubMed Central

    Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.

    2009-01-01

    Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999

  8. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
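    The N2pc is conventionally computed as the contralateral-minus-ipsilateral difference wave at posterior electrodes (often PO7/PO8), with latency and mean amplitude then read off within an analysis window. A minimal sketch; the 180-300 ms window and the synthetic waveform are common conventions and invented values, not necessarily the exact parameters of this study.

```python
import numpy as np

def n2pc_difference(erp_contra, erp_ipsi, times, window=(0.18, 0.30)):
    """Contralateral-minus-ipsilateral difference wave and its summary measures.

    erp_contra, erp_ipsi : 1-D trial-averaged voltages at a posterior pair
    times                : time axis in seconds, same length as the ERPs
    window               : analysis window in which the N2pc typically emerges
    Returns (mean amplitude, peak latency); negative amplitudes indicate an N2pc.
    """
    diff = erp_contra - erp_ipsi
    mask = (times >= window[0]) & (times <= window[1])
    mean_amp = diff[mask].mean()
    peak_latency = times[mask][np.argmin(diff[mask])]   # most negative point
    return mean_amp, peak_latency

# Synthetic averages: a negative contralateral deflection peaking at 220 ms.
times = np.arange(-0.1, 0.5, 0.002)
contra = -2.0 * np.exp(-((times - 0.22) ** 2) / (2 * 0.02 ** 2))
ipsi = np.zeros_like(times)
amp, lat = n2pc_difference(contra, ipsi, times)
```

    SOA effects of the kind tested in the study would show up as shifts in the returned latency (and reductions in the mean amplitude) across conditions.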

  9. The effect of acute sleep deprivation on visual evoked potentials in professional drivers.

    PubMed

    Jackson, Melinda L; Croft, Rodney J; Owens, Katherine; Pierce, Robert J; Kennedy, Gerard A; Crewther, David; Howard, Mark E

    2008-09-01

    Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a "tunnel-vision" effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Repeated-measures experimental study. University laboratory. Nineteen professional drivers (1 woman; mean age = 45.3 +/- 9.1 years). Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs.

  10. Audio-Visual Speech in Noise Perception in Dyslexia

    ERIC Educational Resources Information Center

    van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean

    2018-01-01

    Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a…

  11. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  12. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  14. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  15. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences.

    PubMed

    Lee Masson, Haemy; Bulthé, Jessica; Op de Beeck, Hans P; Wallraven, Christian

    2016-08-01

    Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe along with the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe, including frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input), when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery on haptic shape processing. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. EEG reveals an early influence of social conformity on visual processing in group pressure situations.

    PubMed

    Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried

    2013-01-01

    Humans are social beings and often have to perceive and perform within groups. In conflict situations, this puts them under pressure to either adhere to the group opinion or to risk controversy with the group. Psychological experiments have demonstrated that study participants adapt to erroneous group opinions in visual perception tasks, which they can easily solve correctly when performing on their own. Until now, however, it has been unclear whether this phenomenon of social conformity influences early stages of perception that might not even reach awareness, or later stages of conscious decision-making. Using electroencephalography, this study revealed that social conformity to the wrong group opinion resulted in a decrease of the posterior-lateral P1 in line with a decrease of the later centro-parietal P3. These results suggest that group pressure situations impact early unconscious visual perceptual processing, which results in a later diminished stimulus discrimination and an adaptation even to the wrong group opinion. These findings might have important implications for understanding social behavior in group settings and are discussed within the framework of social influence on eyewitness testimony.

  17. Where's Wally: the influence of visual salience on referring expression generation.

    PubMed

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  18. Classroom Interpreting and Visual Information Processing in Mainstream Education for Deaf Students: Live or Memorex®?

    PubMed Central

    Marschark, Marc; Pelz, Jeff B.; Convertino, Carol; Sapere, Patricia; Arndt, Mary Ellen; Seewagen, Rosemarie

    2006-01-01

    This study examined visual information processing and learning in classrooms including both deaf and hearing students. Of particular interest were the effects on deaf students’ learning of live (three-dimensional) versus video-recorded (two-dimensional) sign language interpreting and the visual attention strategies of more and less experienced deaf signers exposed to simultaneous, multiple sources of visual information. Results from three experiments consistently indicated no differences in learning between three-dimensional and two-dimensional presentations among hearing or deaf students. Analyses of students’ allocation of visual attention and the influence of various demographic and experimental variables suggested considerable flexibility in deaf students’ receptive communication skills. Nevertheless, the findings also revealed a robust advantage in learning in favor of hearing students. PMID:16628250

  19. Spontaneous in-flight accommodation of hand orientation to unseen grasp targets: A case of action blindsight.

    PubMed

    Prentiss, Emily K; Schneider, Colleen L; Williams, Zoë R; Sahin, Bogachan; Mahon, Bradford Z

    2018-03-15

    The division of labour between the dorsal and ventral visual pathways is well established. The ventral stream supports object identification, while the dorsal stream supports online processing of visual information in the service of visually guided actions. Here, we report a case of an individual with a right inferior quadrantanopia who exhibited accurate spontaneous rotation of his wrist when grasping a target object in his blind visual field. His accurate wrist orientation was observed despite the fact that he exhibited no sensitivity to the orientation of the handle in a perceptual matching task. These findings indicate that non-geniculostriate visual pathways process basic volumetric information relevant to grasping, and reinforce the observation that phenomenal awareness is not necessary for an object's volumetric properties to influence visuomotor performance.

  20. Selection and response bias as determinants of priming of pop-out search: Revelations from diffusion modeling.

    PubMed

    Burnham, Bryan R

    2018-05-03

    During visual search, both top-down factors and bottom-up properties contribute to the guidance of visual attention, but selection history can influence attention independent of bottom-up and top-down factors. For example, priming of pop-out (PoP) is the finding that search for a singleton target is faster when the target and distractor features repeat than when those features trade roles between trials. Studies have suggested that such priming (selection history) effects on pop-out search manifest either early, by biasing the selection of the preceding target feature, or later in processing, by facilitating response and target retrieval processes. The present study was designed to examine the influence of selection history on pop-out search by introducing a speed-accuracy trade-off manipulation in a pop-out search task. Ratcliff diffusion modeling (RDM) was used to examine how selection history influenced both attentional bias and response execution processes. The results support the hypothesis that selection history biases attention toward the preceding target's features on the current trial and also influences selection of the response to the target.
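The diffusion-model logic invoked here can be made concrete with a toy simulation: in a two-boundary diffusion process, a selection-history bias is naturally modeled as a starting point shifted toward the repeated target's boundary, which raises the choice probability for that boundary. The sketch below is a minimal illustration only; the parameter values and the starting-point implementation of the bias are assumptions for demonstration, not the fitting procedure used in the study.

```python
import numpy as np

def simulate_ddm(drift, start, n_trials=2000, bound=1.0, dt=0.005,
                 sigma=1.0, max_time=5.0, seed=0):
    """First-passage simulation of a diffusion process between 0 and bound.

    Returns an array of choices: 1 = upper boundary, 0 = lower boundary,
    -1 = no boundary reached within max_time.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, float(start))
    choice = np.full(n_trials, -1)
    alive = np.ones(n_trials, dtype=bool)
    t = 0.0
    while alive.any() and t < max_time:
        t += dt
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x[alive] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        up = alive & (x >= bound)
        down = alive & (x <= 0.0)
        choice[up], choice[down] = 1, 0
        alive &= ~(up | down)
    return choice

# Neutral trials start midway; "primed" trials start shifted toward the
# boundary of the repeated target feature (the selection-bias account).
p_neutral = (simulate_ddm(drift=0.5, start=0.50) == 1).mean()
p_biased = (simulate_ddm(drift=0.5, start=0.65) == 1).mean()
```

With the shifted starting point, a larger fraction of trials terminates at the upper boundary, which is the qualitative signature of an early attentional/selection bias as opposed to a change in response execution.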

  1. Disentangling How the Brain is "Wired" in Cortical (Cerebral) Visual Impairment.

    PubMed

    Merabet, Lotfi B; Mayer, D Luisa; Bauer, Corinna M; Wright, Darick; Kran, Barry S

    2017-05-01

    Cortical (cerebral) visual impairment (CVI) results from perinatal injury to visual processing structures and pathways of the brain and is the most common cause of severe visual impairment or blindness in children in developed countries. Children with CVI display a wide range of visual deficits including decreased visual acuity, impaired visual field function, as well as impairments in higher-order visual processing and attention. Together, these visual impairments can dramatically influence a child's development and well-being. Given the complex neurologic underpinnings of this condition, CVI is often undiagnosed by eye care practitioners. Furthermore, the neurophysiological basis of CVI in relation to observed visual processing deficits remains poorly understood. Here, we present some of the challenges associated with the clinical assessment and management of individuals with CVI. We discuss how advances in brain imaging are likely to help uncover the underlying neurophysiology of this condition. In particular, we demonstrate how structural and functional neuroimaging approaches can help gain insight into abnormalities of white matter connectivity and cortical activation patterns, respectively. Establishing a connection between how changes within the brain relate to visual impairments in CVI will be important for developing effective rehabilitative and education strategies for individuals living with this condition. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Involuntary orienting of attention to a sound desynchronizes the occipital alpha rhythm and improves visual perception.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2017-04-15

    Directing attention voluntarily to the location of a visual target results in an amplitude reduction (desynchronization) of the occipital alpha rhythm (8-14Hz), which is predictive of improved perceptual processing of the target. Here we investigated whether modulations of the occipital alpha rhythm triggered by the involuntary orienting of attention to a salient but spatially non-predictive sound would similarly influence perception of a subsequent visual target. Target discrimination was more accurate when a sound preceded the target at the same location (validly cued trials) than when the sound was on the side opposite to the target (invalidly cued trials). This behavioral effect was accompanied by a sound-induced desynchronization of the alpha rhythm over the lateral occipital scalp. The magnitude of alpha desynchronization over the hemisphere contralateral to the sound predicted correct discriminations of validly cued targets but not of invalidly cued targets. These results support the conclusion that cue-induced alpha desynchronization over the occipital cortex is a manifestation of a general priming mechanism that improves visual processing and that this mechanism can be activated either by the voluntary or involuntary orienting of attention. Further, the observed pattern of alpha modulations preceding correct and incorrect discriminations of valid and invalid targets suggests that involuntary orienting to the non-predictive sound has a rapid and purely facilitatory influence on processing targets on the cued side, with no inhibitory influence on targets on the opposite side. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  4. Visual grouping under isoluminant condition: impact of mental fatigue

    NASA Astrophysics Data System (ADS)

    Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta

    2016-09-01

    Rather than selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study is to determine the influence of mental fatigue on the visual grouping of specific information - the color and configuration of stimuli - in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task in which achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to find the symbols with apertures in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction time is in the evening. Moreover, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in configuration of stimuli. This effect increases significantly in the presence of mental fatigue, but it does not strongly influence the accuracy of task accomplishment.

  5. The influence of visual and phonological features on the hemispheric processing of hierarchical Navon letters.

    PubMed

    Aiello, Marilena; Merola, Sheila; Lasaponara, Stefano; Pinto, Mario; Tomaiuolo, Francesco; Doricchi, Fabrizio

    2018-01-31

    The possibility of allocating attentional resources to the "global" shape or to the "local" details of pictorial stimuli helps visual processing. Investigations with hierarchical Navon letters, which are large "global" letters made up of small "local" ones, consistently demonstrate a right hemisphere advantage for global processing and a left hemisphere advantage for local processing. Here we investigated how the visual and phonological features of the global and local components of Navon letters influence these hemispheric advantages. In a first study in healthy participants, we contrasted the hemispheric processing of hierarchical letters whose global and local items competed for response selection with the processing of hierarchical letters in which the item at the unattended level - a letter, a false letter conveying no phonological information, or a geometrical shape - did not compete for response selection. In a second study, we investigated the hemispheric processing of hierarchical stimuli in which global and local letters were both visually and phonologically congruent (e.g. a large uppercase G made of smaller uppercase Gs), visually incongruent but phonologically congruent (e.g. a large uppercase G made of small lowercase gs), or both visually and phonologically incongruent (e.g. a large uppercase G made of small lowercase or uppercase Ms). In a third study, we administered the same tasks to a right brain damaged patient with a lesion involving pre-striate areas engaged by global processing. The results of the first two experiments showed that the global abilities of the left hemisphere are limited because of its strong susceptibility to interference from local letters, even when these are irrelevant to the task. Phonological features played a crucial role in this interference, because it was fully maintained even when letters at the global and local levels were presented in different uppercase vs. lowercase formats.
In contrast, when local features conveyed no phonological information, the left hemisphere showed preserved global processing abilities. These findings were supported by the study of the right brain damaged patient. These results offer a new look at the hemispheric dominance in the attentional processing of the global and local levels of hierarchical stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Association of visual sensory function and higher order visual processing skills with incident driving cessation

    PubMed Central

    Huisingh, Carrie; McGwin, Gerald; Owsley, Cynthia

    2017-01-01

    Background Many studies on vision and driving cessation have relied on measures of sensory function, which are insensitive to the higher order cognitive aspects of visual processing. The purpose of this study was to examine the association between traditional measures of visual sensory function and higher order visual processing skills with incident driving cessation in a population-based sample of older drivers. Methods Two thousand licensed drivers aged ≥70 were enrolled and followed up for three years. Tests for central vision and visual processing were administered at baseline and included visual acuity, contrast sensitivity, sensitivity in the driving visual field, visual processing speed (Useful Field of View (UFOV) Subtest 2 and Trails B), and spatial ability measured by the Visual Closure Subtest of the Motor-free Visual Perception Test. Participants self-reported the month and year of driving cessation and provided a reason for cessation. Cox proportional hazards models were used to generate crude and adjusted hazard ratios with 95% confidence intervals between visual functioning characteristics and risk of driving cessation over a three-year period. Results During the study period, 164 participants stopped driving, which corresponds to a cumulative incidence of 8.5%. Impaired contrast sensitivity, visual fields, visual processing speed (UFOV and Trails B), and spatial ability were significant risk factors for subsequent driving cessation after adjusting for age, gender, marital status, number of medical conditions, and miles driven. Visual acuity impairment was not associated with driving cessation. Medical problems (63%), specifically musculoskeletal and neurological problems, as well as vision problems (17%), were cited most frequently as the reason for driving cessation. Conclusion Assessment of cognitive and visual functioning can provide useful information about subsequent risk of driving cessation among older drivers. 
In addition, a variety of factors, not just vision, influenced the decision to stop driving and may be amenable to intervention. PMID:27353969
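The hazard ratios above come from Cox regression, which is beyond a short sketch, but the cumulative-incidence figure rests on standard survival-analysis bookkeeping. Below is a minimal Kaplan-Meier estimator in plain Python on an invented five-person toy cohort (the times, censoring flags, and all numbers are illustrative, not the study's data); cumulative incidence at the last event time is one minus the survival estimate.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times:  follow-up time for each participant
    events: 1 if the event (e.g. driving cessation) occurred, 0 if censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, surv, curve, i = n, 1.0, [], 0
    while i < n:
        t = data[i][0]
        j, d = i, 0
        while j < n and data[j][0] == t:   # group ties at the same time
            d += data[j][1]
            j += 1
        if d:                              # survival drops only at event times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= j - i                   # events and censorings leave the risk set
        i = j
    return curve

# Toy cohort: events at t=1, 2, 3; censoring at t=2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
cumulative_incidence = 1.0 - curve[-1][1]
```

The censored observations still contribute to the risk set up to their censoring time, which is what distinguishes this estimate from a naive events-over-enrolled proportion.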

  7. Exploring the influence of encoding format on subsequent memory.

    PubMed

    Turney, Indira C; Dennis, Nancy A; Maillet, David; Rajah, M Natasha

    2017-05-01

    Distinctive encoding is greatly influenced by gist-based processes and has been shown to suffer when highly similar items are presented in close succession. Thus, elucidating how presentation format affects gist processing is essential for determining the factors that influence these encoding processes. The current study utilised multivariate partial least squares (PLS) analysis to identify encoding networks directly associated with retrieval performance in blocked and intermixed presentation conditions. Subsequent memory analysis of successfully encoded items indicated no significant differences in reaction time or retrieval performance across presentation formats. Despite the absence of behavioural differences, behaviour PLS revealed differences in brain-behaviour correlations and mean condition activity in brain regions associated with gist-based vs. distinctive encoding. Specifically, the intermixed format encouraged more distinctive encoding, showing increased activation of regions associated with strategy use and visual processing (e.g., frontal and visual cortices, respectively). Alternatively, the blocked format exhibited increased gist-based processing, accompanied by increased activity in the right inferior frontal gyrus. Together, the results suggest that the sequence in which information is presented during encoding affects the degree to which distinctive encoding is engaged. These findings extend our understanding of Fuzzy Trace Theory and the role of presentation format in encoding processes.

  8. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task.

    PubMed

    Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) region during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.

  10. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection during Visual Search?

    ERIC Educational Resources Information Center

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by…

  11. Visual Information Processing and Response Time in Traffic-Signal Cognition

    DTIC Science & Technology

    1992-03-01

    Master's Thesis, March 1992. Title: Visual Information Processing and Response Time in Traffic-Signal Cognition. Cited references include "The Influence of the Time Duration of Yellow Traffic Signals On Driver Response," ITE Journal (November 1980), and work by William D. Kosnic.

  12. Electrocortical consequences of image processing: The influence of working memory load and worry.

    PubMed

    White, Evan J; Grant, DeMond M

    2017-03-30

    Research suggests that worry precludes emotional processing and biases attentional processes. Although there is burgeoning evidence for the relationship between executive functioning and worry, more research in this area is needed. A recent theory suggests that one mechanism for the negative effects of worry on neural indicators of attention may be working memory load; however, few studies have examined this directly. The goal of the current study was to document the influence of visual and verbal working memory load and worry on attention allocation during the processing of emotional images in a cued image paradigm. It was hypothesized that working memory load would decrease attention allocation during the processing of emotional images. This was tested among 38 participants using a modified S1-S2 paradigm. Results indicated that both the visual and verbal working memory tasks reduced attention allocation to the processing of images across stimulus types compared to the baseline task, although only for individuals low in worry. These data extend the literature by documenting decreased neural responding (i.e., reduced LPP amplitude) to imagery under both visual and verbal working memory load, particularly among individuals low in worry. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  13. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792

  14. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  15. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  16. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.

    PubMed

    Tavares, Gabriela; Perona, Pietro; Rangel, Antonio

    2017-01-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
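The aDDM referred to here makes the attended item's value count fully while discounting the unattended item's value by a parameter theta, so longer fixation on one option biases the accumulator toward it. The following is a rough, seed-fixed sketch of that mechanism; the fixation schedule, theta, drift, and noise values are assumptions for illustration, not the authors' fitted parameters.

```python
import numpy as np

def addm_trial(r_a, r_b, rng, theta=0.3, d=0.001, sigma=0.02,
               fix=(400, 200), bound=1.0, max_ms=3000):
    """One aDDM-style trial: relative evidence for item A vs item B.

    Fixations alternate A-first in blocks of fix[0] ms on A and fix[1] ms
    on B; the unattended item's value is discounted by theta.
    Returns 1 if A is chosen, 0 if B is chosen.
    """
    x, cycle = 0.0, fix[0] + fix[1]
    for t in range(max_ms):
        on_a = (t % cycle) < fix[0]
        drift = d * (r_a - theta * r_b) if on_a else -d * (r_b - theta * r_a)
        x += drift + sigma * rng.standard_normal()
        if x >= bound:
            return 1
        if x <= -bound:
            return 0
    return int(x > 0)  # fall back to the sign of the evidence if unabsorbed

rng = np.random.default_rng(1)
# Equal values: any choice bias comes purely from the attentional asymmetry
# (A receives longer fixations than B under this schedule).
p_a = np.mean([addm_trial(2.0, 2.0, rng) for _ in range(300)])
```

Even with identical values, the option fixated longer is chosen more often, which is the attentional choice bias the paper measures.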

  17. Influence of visual path information on human heading perception during rotation.

    PubMed

    Li, Li; Chen, Jing; Peng, Xiaozhe

    2009-03-31

    How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
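Under pure translation, every optic-flow vector points away from the focus of expansion (FOE), whose image location specifies heading; a standard way to recover it is a least-squares intersection of the flow lines. The sketch below implements that textbook construction on synthetic, noiseless flow. It illustrates heading-from-flow in general and is not the model used in the paper.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion for a translational flow field.

    Each flow vector lies on a line through the FOE; minimize the summed
    squared perpendicular distance from the FOE to all of those lines.
    """
    n = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, ni in zip(points, n):
        P = np.eye(2) - np.outer(ni, ni)   # projector onto the line's normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Synthetic expanding flow: vectors radiate from the FOE with magnitude
# scaled by inverse depth (points are assumed not to coincide with the FOE).
rng = np.random.default_rng(0)
foe_true = np.array([12.0, -7.0])
pts = rng.uniform(-60.0, 60.0, size=(200, 2))
depths = rng.uniform(5.0, 50.0, size=(200, 1))
flows = (pts - foe_true) / depths
foe_est = estimate_foe(pts, flows)
```

Shrinking the field of view or the depth range makes the flow lines more nearly parallel, which ill-conditions this intersection; that is one intuition for why heading bias grows under those manipulations when no path information is available.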

  18. Aging and the rate of visual information processing.

    PubMed

    Guest, Duncan; Howard, Christina J; Brown, Louise A; Gleeson, Harriet

    2015-01-01

    Multiple methods exist for measuring how age influences the rate of visual information processing. The most advanced methods model the processing dynamics in a task in order to estimate processing rates independently of other factors that might be influenced by age, such as overall performance level and the time at which processing onsets. However, such modeling techniques have produced mixed evidence for age effects. Using a time-accuracy function (TAF) analysis, Kliegl, Mayr, and Krampe (1994) showed clear evidence for age effects on processing rate. In contrast, using the diffusion model to examine the dynamics of decision processes, Ratcliff and colleagues (e.g., Ratcliff, Thapar, & McKoon, 2006) found no evidence for age effects on processing rate across a range of tasks. Examination of these studies suggests that the number of display stimuli might account for the different findings. In three experiments we measured the precision of younger and older adults' representations of target stimuli after different amounts of stimulus exposure. A TAF analysis found little evidence for age differences in processing rate when a single stimulus was presented (Experiment 1). However, adding three nontargets to the display resulted in age-related slowing of processing (Experiment 2). Similar slowing was observed when simply presenting two stimuli and using a post-cue to indicate the target (Experiment 3). Although there was some interference from distracting objects and from previous responses, these age-related effects on processing rate seem to reflect an age-related difficulty in processing multiple objects, particularly when encoding them into visual working memory.

  19. The Influence of Scene Context on Parafoveal Processing of Objects.

    PubMed

    Castelhano, Monica S; Pereira, Effie J

    2017-04-21

    Many studies in reading have shown the enhancing effect of context on the processing of a word before it is directly fixated (parafoveal processing of words; Balota et al., 1985; Balota & Rayner, 1983; Ehrlich & Rayner, 1981). Here, we examined whether scene context influences the parafoveal processing of objects and enhances the extraction of object information. Using a modified boundary paradigm (Rayner, 1975), the Dot-Boundary paradigm, participants fixated on a suddenly-onsetting cue before the preview object would onset 4° away. The preview object could be identical to the target, visually similar, visually dissimilar, or a control (black rectangle). The preview changed to the target object once a saccade toward the object was made. Critically, the objects were presented on either a consistent or an inconsistent scene background. Results revealed that there was a greater processing benefit for consistent than inconsistent scene backgrounds and that identical and visually similar previews produced greater processing benefits than other previews. In the second experiment, we added an additional context condition in which the target location was inconsistent, but the scene semantics remained consistent. We found that changing the location of the target object disrupted the processing benefit derived from the consistent context. Most importantly, across both experiments, the effect of preview was not enhanced by scene context. Thus, preview information and scene context appear to independently boost the parafoveal processing of objects without any interaction from object-scene congruency.

  20. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    PubMed

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, although it is theoretically subject to interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate, through increased search efficiency, is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
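The moderation analysis described here reduces to a logistic regression with a working-memory-by-clutter interaction term. A minimal simulated sketch (variable names, coefficients, and the data-generating process are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated inspection trials: a standardized working-memory score,
# a low/high visual-clutter condition, and a hazard detected (1) /
# missed (0) outcome generated with a positive wm-by-clutter interaction.
wm = rng.normal(size=n)
clutter = rng.integers(0, 2, size=n).astype(float)
true_logit = -0.5 + 0.4 * wm - 0.8 * clutter + 0.6 * wm * clutter
detected = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression including the interaction (moderation) term,
# fitted by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), wm, clutter, wm * clutter])
beta = np.zeros(4)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (detected - p) / n

# A positive interaction coefficient (beta[3]) indicates that working
# memory aids detection more strongly under high visual clutter.
```

The sign and size of the interaction coefficient, not the main effects alone, carry the moderation claim.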

  1. A number-form area in the blind

    PubMed Central

    Abboud, Sami; Maidenbaum, Shachar; Dehaene, Stanislas; Amedi, Amir

    2015-01-01

    Distinct preference for visual number symbols was recently discovered in the human right inferior temporal gyrus (rITG). It remains unclear how this preference emerges, what is the contribution of shape biases to its formation and whether visual processing underlies it. Here we use congenital blindness as a model for brain development without visual experience. During fMRI, we present blind subjects with shapes encoded using a novel visual-to-music sensory-substitution device (The EyeMusic). Greater activation is observed in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols. Using resting-state fMRI in the blind and sighted, we further show that the areas with preference for numerals and letters exhibit distinct patterns of functional connectivity with quantity and language-processing areas, respectively. Our findings suggest that specificity in the ventral ‘visual’ stream can emerge independently of sensory modality and visual experience, under the influence of distinct connectivity patterns. PMID:25613599

  2. New insights into the role of motion and form vision in neurodevelopmental disorders.

    PubMed

    Johnston, Richard; Pitchford, Nicola J; Roach, Neil W; Ledgeway, Timothy

    2017-12-01

    A selective deficit in processing the global (overall) motion, but not form, of spatially extensive objects in the visual scene is frequently associated with several neurodevelopmental disorders, including preterm birth. Existing theories proposed to explain the origin of this visual impairment are, however, challenged by recent research. In this review, we explore alternative hypotheses for why deficits in the processing of global motion, relative to global form, might arise. We describe recent evidence that has utilised novel tasks of global motion and global form to elucidate the underlying nature of the visual deficit reported in different neurodevelopmental disorders. We also examine the role of IQ and how the sex of an individual can influence performance on these tasks, as these are factors that are associated with performance on global motion tasks, but have not been systematically controlled for in previous studies exploring visual processing in clinical populations. Finally, we suggest that a new theoretical framework is needed for visual processing in neurodevelopmental disorders and present recommendations for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    PubMed

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). 
Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.

  4. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role. PMID:25873869

  5. Hemispheric processing of predictive inferences during reading: The influence of negatively emotional valenced stimuli.

    PubMed

    Virtue, Sandra; Schutzenhofer, Michael; Tomkins, Blaine

    2017-07-01

    Although a left hemisphere advantage is usually evident during language processing, the right hemisphere is highly involved during the processing of weakly constrained inferences. However, currently little is known about how the emotional valence of environmental stimuli influences the hemispheric processing of these inferences. In the current study, participants read texts promoting either strongly or weakly constrained predictive inferences and performed a lexical decision task to inference-related targets presented to the left visual field-right hemisphere or the right visual field-left hemisphere. While reading these texts, participants either listened to dissonant music (i.e., the music condition) or did not listen to music (i.e., the no music condition). In the no music condition, the left hemisphere showed an advantage for strongly constrained inferences compared to weakly constrained inferences, whereas the right hemisphere showed high facilitation for both strongly and weakly constrained inferences. In the music condition, both hemispheres showed greater facilitation for strongly constrained inferences than for weakly constrained inferences. These results suggest that negatively valenced stimuli (such as dissonant music) selectively influence the right hemisphere's processing of weakly constrained inferences during reading.

  6. Information theoretical assessment of visual communication with subband coding

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.

    1994-09-01

    A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally, this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is 'suboptimal.' We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
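The restoration step in such an end-to-end assessment, recovering the transmitted signal with a Wiener filter built from the image-gathering device's frequency response and the signal and noise spectra, can be sketched in one dimension. This is a toy model with assumed spectra and a Gaussian blur, not the paper's actual device parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
signal = np.sin(2 * np.pi * 3 * np.arange(n) / n)      # toy 'scene radiance'
blur = np.exp(-0.5 * (np.fft.fftfreq(n) / 0.05) ** 2)  # device transfer function
noise_power = 0.01

# Degraded observation: blurred signal plus additive sensor noise.
observed = np.fft.ifft(np.fft.fft(signal) * blur).real
observed += rng.normal(scale=np.sqrt(noise_power), size=n)

# Wiener restoration per frequency bin: H* S / (|H|^2 S + N),
# assuming the signal power spectrum S and noise level N are known.
S = np.abs(np.fft.fft(signal)) ** 2 / n
N = noise_power * np.ones(n)
W = np.conj(blur) * S / (np.abs(blur) ** 2 * S + N)
restored = np.fft.ifft(W * np.fft.fft(observed)).real

mse_raw = np.mean((observed - signal) ** 2)
mse_restored = np.mean((restored - signal) ** 2)
```

The filter suppresses frequency bins where noise dominates and inverts the blur where signal dominates, which is the sense in which the device response "designs" the optimal restoration.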

  7. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.

  8. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, with the aim of studying how the human brain integrates synchronized visual and auditory information in videos of real-world events, using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory information streams can interfere with each other, which would influence the results of the cognitive integration processing.

  9. Dissociation between perceptual processing and priming in long-term lorazepam users.

    PubMed

    Giersch, Anne; Vidailhet, Pierre

    2006-12-01

    Acute effects of lorazepam on visual information processing, perceptual priming and explicit memory are well established. However, visual processing and perceptual priming have rarely been explored in long-term lorazepam users. By exploring these functions it was possible to test the hypothesis that difficulty in processing visual information may lead to deficiencies in perceptual priming. Using a single-blind procedure, we tested explicit memory, perceptual priming and visual perception in 15 long-term lorazepam users and 15 control subjects individually matched according to sex, age and education level. Explicit memory, perceptual priming, and the identification of fragmented pictures were found to be preserved in long-term lorazepam users, contrary to what is usually observed after an acute drug intake. The processing of visual contour, on the other hand, was still significantly impaired. These results suggest that the effects observed on low-level visual perception are independent of the acute deleterious effects of lorazepam on perceptual priming. A comparison of perceptual priming in subjects with low- vs. high-level identification of new fragmented pictures further suggests that the ability to identify fragmented pictures has no influence on priming. Despite the fact that they were treated with relatively low doses and far from peak plasma concentration, it is noteworthy that in long-term users memory was preserved.

  10. The Effect of Semantic Transparency on the Processing of Morphologically Derived Words: Evidence from Decision Latencies and Event-Related Potentials

    ERIC Educational Resources Information Center

    Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.

    2017-01-01

    Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…

  11. The Development of Verbal and Visual Working Memory Processes: A Latent Variable Approach

    ERIC Educational Resources Information Center

    Koppenol-Gonzalez, Gabriela V.; Bouwmeester, Samantha; Vermunt, Jeroen K.

    2012-01-01

    Working memory (WM) processing in children has been studied with different approaches, focusing on either the organizational structure of WM processing during development (factor analytic) or the influence of different task conditions on WM processing (experimental). The current study combined both approaches, aiming to distinguish verbal and…

  12. The Influence of Semantic Neighbours on Visual Word Recognition

    ERIC Educational Resources Information Center

    Yates, Mark

    2012-01-01

    Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…

  13. Transdisciplinary Dimensions in the Composing Activities of Children: Transfer of Strategies and Transformation of Knowledge

    ERIC Educational Resources Information Center

    Roels, Johanna Maria; Van Petegem, Peter

    2016-01-01

    Existing studies show the value of using visual expression as a means of teaching children to understand and create music. This study aspires to point out an additional valuable aspect, namely, the influence composing via visual expression--whereby children transform their own drawings--may have on children's subsequent compositional processes.…

  14. Developmental Shifts in Children's Sensitivity to Visual Speech: A New Multimodal Picture-Word Task

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Spence, Melanie J.; Tye-Murray, Nancy; Abdi, Herve

    2009-01-01

    This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously…

  15. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication, which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  16. Components of working memory and visual selective attention.

    PubMed

    Burnham, Bryan R; Sabia, Matthew; Langan, Catherine

    2014-02-01

    Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. Biasing spatial attention with semantic information: an event coding approach.

    PubMed

    Amer, Tarek; Gozli, Davood G; Pratt, Jay

    2017-04-21

    We investigated the influence of conceptual processing on visual attention from the standpoint of Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task, in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found relevant explicit cues (which featurally overlap to a greater extent with the target) and implicit cues (which featurally overlap to a lesser extent), respectively, facilitated and interfered with target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.

  18. Effects of visual and verbal sexual television content and perceived realism on attitudes and beliefs.

    PubMed

    Tayler, Laramie D

    2005-05-01

    Previous studies of the effects of sexual television content have resulted in mixed findings. Based on the information processing model of media effects, I proposed that the messages embodied in such content, the degree to which viewers perceive television content as realistic, and whether sexual content is conveyed using visual or verbal symbols may influence the nature or degree of such effects. I explored this possibility through an experiment in which 182 college undergraduates were exposed to visual or verbal sexual television content, neutral television content, or no television at all prior to completing measures of sexual attitudes and beliefs. Although exposure to sexual content generally did not produce significant main effects, it did influence the attitudes of those who perceive television to be relatively realistic. Verbal sexual content was found to influence beliefs about women's sexual activity among the same group.

  19. Information Processing in the Cerebral Hemispheres: Selective Hemispheric Activation and Capacity Limitations.

    ERIC Educational Resources Information Center

    Hellige, Joseph B.; And Others

    1979-01-01

    Five experiments are reported concerning the effect on visual information processing of concurrently maintaining verbal information. The results suggest that the left cerebral hemisphere functions as a typical limited-capacity information processing system that can be influenced somewhat separately from the right hemisphere system. (Author/CTM)

  20. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres

    PubMed Central

    Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.

    2016-01-01

    A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865
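The information-theoretic side of this analysis quantifies, in bits, how much a stimulus feature (e.g., visibility of the contralateral eye) tells us about a neural or behavioral response. A minimal plug-in mutual-information estimator for discrete variables is sketched below; the paper's actual pipeline (reverse correlation over many stimulus dimensions and time points) is far more elaborate:

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate (in bits) of the mutual information between
    two discrete 1-D arrays of equal length."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Toy check: a feature that fully determines a binary response carries
# exactly 1 bit; a statistically unrelated feature carries none.
feature = np.repeat([0, 1], 500)
mi_informative = mutual_info(feature, feature)             # 1.0 bit
mi_unrelated = mutual_info(feature, np.tile([0, 1], 500))  # 0.0 bits
```

Computing such estimates per sensor and per time point is what localizes "where" and "when" a feature like the contralateral eye is coded.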

  1. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  2. The Role of Visual Noise in Influencing Mental Load and Fatigue in a Steady-State Motion Visual Evoked Potential-Based Brain-Computer Interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang

    2017-08-14

    As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of the mental load and fatigue that arise from concentrating on the visual stimuli. Noise, a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ+α powers, the θ/α ratio, and electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting a moderate visual noise to participants could reliably alleviate the mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrates that noise could provide a superior solution for the implementation of visual attention-controlled BCI applications.
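The θ/α ratio used as a fatigue index above is simply a ratio of EEG band powers. A toy computation on a synthetic trace (sampling rate, band edges, and signal composition are illustrative, not the study's recording parameters):

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of `x` in the [lo, hi] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic 'EEG': a strong 6 Hz (theta) component plus a weak 10 Hz
# (alpha) component, so the theta/alpha ratio comes out well above 1.
fs, dur = 250, 4
t = np.arange(fs * dur) / fs
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

theta = band_power(eeg, fs, 4, 8)   # theta band power
alpha = band_power(eeg, fs, 8, 13)  # alpha band power
ratio = theta / alpha               # commonly rises with mental fatigue
```

In practice such band powers are estimated with averaged (Welch-style) spectra over artifact-cleaned epochs rather than a single raw periodogram.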

  3. Image and emotion: from outcomes to brain behavior.

    PubMed

    Nanda, Upali; Zhu, Xi; Jansen, Ben H

    2012-01-01

    A systematic review of neuroscience articles on the emotional states of fear, anxiety, and pain to understand how emotional response is linked to the visual characteristics of an image at the level of brain behavior. A number of outcome studies link exposure to visual images (with nature content) to improvements in stress, anxiety, and pain perception. However, an understanding of the underlying perceptual mechanisms has been lacking. In this article, neuroscience studies that use visual images to induce fear, anxiety, or pain are reviewed to gain an understanding of how the brain processes visual images in this context and to explore whether this processing can be linked to specific visual characteristics. The amygdala was identified as one of the key regions of the brain involved in the processing of fear, anxiety, and pain (induced by visual images). Other key areas included the thalamus, insula, and hippocampus. Characteristics of visual images such as the emotional dimension (valence/arousal), subject matter (familiarity, ambiguity, novelty, realism, and facial expressions), and form (sharp and curved contours) were identified as key factors influencing emotional processing. The broad structural properties of an image and overall content were found to have a more pivotal role in the emotional response than the specific details of an image. Insights on specific visual properties were translated to recommendations for what should be incorporated-and avoided-in healthcare environments.

  4. Left-Lateralized Contributions of Saccades to Cortical Activity During a One-Back Word Recognition Task.

    PubMed

    Chang, Yu-Cherng C; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N; Hämäläinen, Matti S; Temereanca, Simona

    2018-01-01

    Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.

  5. Left-Lateralized Contributions of Saccades to Cortical Activity During a One-Back Word Recognition Task

    PubMed Central

    Chang, Yu-Cherng C.; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N.; Hämäläinen, Matti S.; Temereanca, Simona

    2018-01-01

    Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150–350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition. PMID:29867372

  6. The Theory-based Influence of Map Features on Risk Beliefs: Self-reports of What is Seen and Understood for Maps Depicting an Environmental Health Hazard

    PubMed Central

    Vatovec, Christine

    2013-01-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. We report results from thirteen cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed three formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (pre-attentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared to abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: pre-attentive “incremental risk” meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals. PMID:22715919

  7. The theory-based influence of map features on risk beliefs: self-reports of what is seen and understood for maps depicting an environmental health hazard.

    PubMed

    Severtson, Dolores J; Vatovec, Christine

    2012-08-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. The authors report results from 13 cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed 3 formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (preattentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared with abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: preattentive "incremental risk" meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals.

  8. The Influence of Phonetic Dimensions on Aphasic Speech Perception

    ERIC Educational Resources Information Center

    Hessler, Dorte; Jonkers, Roel; Bastiaanse, Roelien

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with "audiovisual", "auditory only" and "visual only" stimulus display. Subjects had to…

  9. The Effect of Acute Sleep Deprivation on Visual Evoked Potentials in Professional Drivers

    PubMed Central

    Jackson, Melinda L.; Croft, Rodney J.; Owens, Katherine; Pierce, Robert J.; Kennedy, Gerard A.; Crewther, David; Howard, Mark E.

    2008-01-01

    Study Objectives: Previous studies have demonstrated that as little as 18 hours of sleep deprivation can cause deleterious effects on performance. It has also been suggested that sleep deprivation can cause a “tunnel-vision” effect, in which attention is restricted to the center of the visual field. The current study aimed to replicate these behavioral effects and to examine the electrophysiological underpinnings of these changes. Design: Repeated-measures experimental study. Setting: University laboratory. Patients or Participants: Nineteen professional drivers (1 woman; mean age = 45.3 ± 9.1 years). Interventions: Two experimental sessions were performed; one following 27 hours of sleep deprivation and the other following a normal night of sleep, with control for circadian effects. Measurements & Results: A tunnel-vision task (central versus peripheral visual discrimination) and a standard checkerboard-viewing task were performed while 32-channel EEG was recorded. For the tunnel-vision task, sleep deprivation resulted in an overall slowing of reaction times and increased errors of omission for both peripheral and foveal stimuli (P < 0.05). These changes were related to reduced P300 amplitude (indexing cognitive processing) but not measures of early visual processing. No evidence was found for an interaction effect between sleep deprivation and visual-field position, either in terms of behavior or electrophysiological responses. Slower processing of the sustained parvocellular visual pathway was demonstrated. Conclusions: These findings suggest that performance deficits on visual tasks during sleep deprivation are due to higher cognitive processes rather than early visual processing. Sleep deprivation may differentially impair processing of more-detailed visual information. Features of the study design (eg, visual angle, duration of sleep deprivation) may influence whether peripheral visual-field neglect occurs. 
Citation: Jackson ML; Croft RJ; Owens K; Pierce RJ; Kennedy GA; Crewther D; Howard ME. The effect of acute sleep deprivation on visual evoked potentials in professional drivers. SLEEP 2008;31(9):1261-1269. PMID:18788651

  10. Attention Effects During Visual Short-Term Memory Maintenance: Protection or Prioritization?

    PubMed Central

    Matsukura, Michi; Luck, Steven J.; Vecera, Shaun P.

    2007-01-01

    Interactions between visual attention and visual short-term memory (VSTM) play a central role in cognitive processing. For example, attention can assist in selectively encoding items into visual memory. Attention appears to be able to influence items already stored in visual memory as well; cues that appear long after the presentation of an array of objects can affect memory for those objects (Griffin & Nobre, 2003). In five experiments, we distinguished two possible mechanisms for the effects of cues on items currently stored in VSTM. A protection account proposes that attention protects the cued item from becoming degraded during the retention interval. By contrast, a prioritization account suggests that attention increases a cued item’s priority during the comparison process that occurs when memory is tested. The results of the experiments were consistent with the first of these possibilities, suggesting that attention can serve to protect VSTM representations while they are being maintained. PMID:18078232

  11. Predictions penetrate perception: Converging insights from brain, behaviour and disorder

    PubMed Central

    O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe

    2018-01-01

    It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence that indicate a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—that can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169

  12. Object perception is selectively slowed by a visually similar working memory load.

    PubMed

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  13. Neural model for processing the influence of visual orientation on visually perceived eye level (VPEL).

    PubMed

    Matin, L; Li, W

    2001-10-01

    An individual line or a combination of lines viewed in darkness has a large influence on the elevation to which an observer sets a target so that it is perceived to lie at eye level (VPEL). These influences are systematically related to the orientation of pitched-from-vertical lines on pitched plane(s) and to the lengths of the lines, as well as to the orientations of lines of 'equivalent pitch' that lie on frontoparallel planes. A three-stage model processes the visual influence: the first stage processes the orientations of the lines in parallel, utilizing 2 classes of orientation-sensitive neural units in each hemisphere, with the two classes sensitive to opposing ranges of orientations; the signal delivered by each class is of opposite sign in the two hemispheres. The second stage generates the total visual influence from the parallel combination of inputs delivered by the 4 groups of the first stage, and a third stage combines the total visual influence from the second stage with signals from the body-referenced mechanism that contains information about the position and orientation of the eyes, head, and body. The circuit equation describing the combined influence of n separate inputs from stage 1 on the output of the stage 2 integrating neuron is derived for n stimulus lines which possess any combination of orientations and lengths; each of the n lines is assumed to stimulate one of the groups of orientation-sensitive units in visual cortex (stage 1) whose signals converge onto a dendrite of the integrating neuron (stage 2), and to produce changes in postsynaptic membrane conductance (g(i)) and potential (V(i)) there. The net current from the n dendrites results in a voltage change (V(A)) at the initial segment of the axon of the integrating neuron. Nerve impulse frequency proportional to this voltage change signals the total visual influence on perceived elevation of the visual field.
The circuit equation corresponding to the total visual influence for n equal-length inducing lines is V(A) = sum V(i) / [n + (g(A)/g(S))], where the potential change due to line i, V(i), is proportional to line orientation, g(A) is the conductance at the axon's summing point, and g(S) = g(i) for each i in the equal-length case; the net conductance change due to a line is proportional to the line's length. The circuit equation is interpreted as a basis for quantitative predictions from the model that can be compared to psychophysical measurements of the elevation of VPEL. The interpretation provides the predicted relation for the visual influence on VPEL, V, produced by n inducing lines each of length l: V = a + [k(1) sum theta(i)] / [n + (k(2)/l)], where theta(i) is the orientation of line i, a is the effect of the body-referenced mechanism, and k(1) and k(2) are constants. The model's output is fitted to the results of five sets of experiments in which the elevation of VPEL measured with a small target in the median plane is systematically influenced by distantly located 1-line or 2-line inducing stimuli varying in orientation and length and viewed in otherwise total darkness with gaze restricted to the median plane; each line is located at 25 degrees eccentricity to the left or right of the median plane. The model predicts the negatively accelerated growth of VPEL with line length for each orientation and the change of the slope constant of the linear combination rule among lines from 1.00 (linear summation; short lines) to 0.61 (near-averaging; long lines).
Fits to the data are obtained over a range of orientations from -30 degrees to +30 degrees of pitch for 1-line visual fields with lengths from 3 degrees to 64 degrees, for parallel 2-line visual fields over the same range of lengths and orientations, for short and long 2-line combinations in which each of the two members may have any orientation (parallel or nonparallel pairs), and for the well-illuminated and fully structured pitchroom. In addition, similar experiments with 2-line stimuli of equivalent pitch in the frontoparallel plane were also fitted to the model. The model accounts for more than 98% of the variance of the results in each case.
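    The combination rule above, V = a + [k(1) sum theta(i)] / [n + (k(2)/l)], is simple enough to sketch directly. The parameter values below are illustrative placeholders, not the fitted constants from the paper; the sketch only reproduces the model's qualitative predictions, namely saturating (negatively accelerated) growth of the visual influence with line length and a shift from summation-like toward averaging-like combination as lines lengthen:

```python
def vpel_influence(thetas, length, a=0.0, k1=0.6, k2=10.0):
    """Predicted visual influence on VPEL (degrees).

    thetas: orientations theta_i of the n inducing lines (degrees of pitch)
    length: common line length l (degrees of visual angle)
    a:      offset from the body-referenced mechanism
    k1, k2: model constants (hypothetical values, not fitted ones)
    """
    n = len(thetas)
    return a + k1 * sum(thetas) / (n + k2 / length)

# Negatively accelerated growth with line length for a single 30-degree line:
short_line = vpel_influence([30.0], length=3.0)
long_line = vpel_influence([30.0], length=64.0)

# Two long parallel lines combine at well under twice the single-line value,
# i.e. the rule approaches averaging rather than summation:
two_long = vpel_influence([30.0, 30.0], length=64.0)
```

    As length l grows, the k2/l term vanishes and the influence of a single line saturates toward a + k1*theta, which is the negatively accelerated growth the record describes.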

  14. How Do Adolescents Process Advertisements? The Influence of Ad Characteristics, Processing Objective, and Gender.

    PubMed

    Edens; McCormick

    2000-10-01

    This study investigates the influences of print advertisements on the affective and cognitive responses of adolescents. Junior and senior high school males (n = 111) and females (n = 84) were randomly assigned to either a low- or high-elaboration condition to process primarily visual and primarily verbal print advertisements. The students then responded to questions measuring three dependent variables: memory of specific facts, inference, and emotional response. Three-way ANOVA results indicated that predominantly visual advertisements elicited memory of more facts, more inferencing, and more intense emotional responses than predominantly verbal ads. In addition, females remembered more facts, made more inferences, reported stronger emotional responses, and detected the explicit claim of the ad more frequently than males. Finally, students in the high-elaboration condition remembered more details than students in the low-elaboration condition. The results are discussed in terms of implications for advertising media literacy. Copyright 2000 Academic Press.

  15. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    PubMed Central

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  16. When art moves the eyes: a behavioral and eye-tracking study.

    PubMed

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.

  17. Using Social Network Graphs as Visualization Tools to Influence Peer Selection Decision-Making Strategies to Access Information about Complex Socioscientific Issues

    ERIC Educational Resources Information Center

    Yoon, Susan A.

    2011-01-01

    This study extends previous research that explores how visualization affordances that computational tools provide and social network analyses that account for individual- and group-level dynamic processes can work in conjunction to improve learning outcomes. The study's main hypothesis is that when social network graphs are used in instruction,…

  18. From Spoke to Hub: Transforming Organizational Vision and Strategy With Story and Visual Art

    ERIC Educational Resources Information Center

    Tyler, Jo A.

    2015-01-01

    This article reports on a case study at an inner-city nonprofit service agency that inquired into the ways integration of storytelling and visual art as a method of adult learning and way of knowing might influence the process of strategic visioning and planning in a nonprofit organization. The case study focuses on data collected through…

  19. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  20. The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging

    ERIC Educational Resources Information Center

    De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona

    2007-01-01

    Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…

  1. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    PubMed

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    PubMed

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  3. Influence of cognitive style and interstimulus interval on the hemispheric processing of tactile stimuli.

    PubMed

    Minagawa, N; Kashu, K

    1989-06-01

    Sixteen adult subjects performed a tactile recognition task. According to our 1984 study, half of the subjects were classified as having a left hemispheric preference for the processing of visual stimuli, while the other half were classified as having a right hemispheric preference. The present task was conducted according to the S1-S2 matching paradigm. The standard stimulus was a readily recognizable object and was presented tactually to either the left or right hand of each subject. The comparison stimulus was an object-picture and was presented visually by slide in a tachistoscope. The interstimulus interval was .05 sec. or 2.5 sec. Analysis indicated that the left-preference group showed right-hand superiority, and the right-preference group showed left-hand superiority. The notion of individual hemisphericity was supported in tactile processing.

  4. Global and local processing near the left and right hands

    PubMed Central

    Langerak, Robin M.; La Mantia, Carina L.; Brown, Liana E.

    2013-01-01

    Visual targets can be processed more quickly and reliably when a hand is placed near the target. Both unimodal and bimodal representations of hands are largely lateralized to the contralateral hemisphere, and since each hemisphere demonstrates specialized cognitive processing, it is possible that targets appearing near the left hand may be processed differently than targets appearing near the right hand. The purpose of this study was to determine whether visual processing near the left and right hands interacts with hemispheric specialization. We presented hierarchical-letter stimuli (e.g., small characters used as local elements to compose large characters at the global level) near the left or right hands separately and instructed participants to discriminate the presence of target letters (X and O) from non-target letters (T and U) at either the global or local levels as quickly as possible. Targets appeared at either the global or local level of the display, at both levels, or were absent from the display; participants made foot-press responses. When discriminating target presence at the global level, participants responded more quickly to stimuli presented near the left hand than near either the right hand or in the no-hand condition. Hand presence did not influence target discrimination at the local level. Our interpretation is that left-hand presence may help participants discriminate global information, a right hemisphere (RH) process, and that the left hand may influence visual processing in a way that is distinct from the right hand. PMID:24194725

  5. Dissociating emotion-induced blindness and hypervision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-12-01

    Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.

  6. [Physiological basis of a possible increase in the efficacy of the photo- and magnetotherapy of the visual nerve upon partial atrophy and ischemia].

    PubMed

    Shlygin, V V; Tiuliaev, A P; Ioĭleva, E E; Maksimov, G V

    2004-01-01

    An approach to the choice of parameters for physiotherapeutic and biophysical action on the visual nerve is proposed. The approach is based on parallel photo- and magnetostimulation of excitable fibers and takes into account the morphological and electrophysiological properties of the fibers and some parameters of the pathological processes associated with partial atrophy and ischemia. A method was developed for coordinating photostimulation of a portion of the retina by light flashes (intensity 65 mW at an emission wavelength of 660 nm) with the choice of parameters of the magnetic action on the visual nerve (amplitude 73 mT, wave-front duration 40 ms, and pulse-repetition frequency of about 1 Hz).

  7. The relationship between visual word and face processing lateralization in the fusiform gyri: A cross-sectional study.

    PubMed

    Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S

    2016-08-01

    Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. Twenty-six literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Mental rotation impairs attention shifting and short-term memory encoding: neurophysiological evidence against the response-selection bottleneck model of dual-task performance.

    PubMed

    Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard

    2011-09-01

    Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored by using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN) believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results suggest interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms, indicating that response selection is not the sole source of dual-task delays and that all three processes likely share a common resource such as executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. The visual analysis of emotional actions.

    PubMed

    Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie

    2006-01-01

    Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.

  10. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?

    PubMed Central

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973

  12. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the time course and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single-trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200 ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex is a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Eye-Tracking Measures Reveal How Changes in the Design of Aided AAC Displays Influence the Efficiency of Locating Symbols by School-Age Children without Disabilities

    ERIC Educational Resources Information Center

    Wilkinson, Krista M.; O'Neill, Tara; McIlvane, William J.

    2014-01-01

    Purpose: Many individuals with communication impairments use aided augmentative and alternative communication (AAC) systems involving letters, words, or line drawings that rely on the visual modality. It seems reasonable to suggest that display design should incorporate information about how users attend to and process visual information. The…

  14. Area 18 of the cat: the first step in processing visual movement information.

    PubMed

    Orban, G A

    1977-01-01

    In cats, responses of area 18 neurons to different moving patterns were measured. The influence of three movement parameters (direction, angular velocity, and amplitude of movement) was tested. The results indicate that no ideal movement detector exists in area 18, but that simple and complex cells perform the complementary operations of primary visual areas, i.e., the analysis and the detection of movement.

  15. Neurofeedback training of gamma band oscillations improves perceptual processing.

    PubMed

    Salari, Neda; Büchel, Christian; Rose, Michael

    2014-10-01

    In this study, a noninvasive electroencephalography-based neurofeedback method is applied to train volunteers to deliberately increase gamma band oscillations (40 Hz) in the visual cortex. Gamma band oscillations in the visual cortex play a functional role in perceptual processing. In a previous study, we were able to demonstrate that gamma band oscillations prior to stimulus presentation have a significant influence on perceptual processing of visual stimuli. In the present study, we aimed to investigate longer lasting effects of gamma band neurofeedback training on perceptual processing. For this purpose, a feedback group was trained to modulate oscillations in the gamma band, while a control group participated in a task with an identical design setting but without gamma band feedback. Before and after training, both groups participated in a perceptual object detection task and a spatial attention task. Our results clearly revealed that only the feedback group but not the control group exhibited a visual processing advantage and an increase in oscillatory gamma band activity in the pre-stimulus period of the processing of the visual object stimuli after the neurofeedback training. Results of the spatial attention task showed no difference between the groups, which underlines the specific role of gamma band oscillations for perceptual processing. In summary, our results show that modulation of gamma band activity selectively affects perceptual processing and therefore supports the relevant role of gamma band activity for this specific process. Furthermore, our results demonstrate the suitability of gamma band oscillations as a valuable tool for neurofeedback applications.

  16. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  17. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres.

    PubMed

    Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G

    2016-08-22

    A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.

  18. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Top-down regulation of default mode activity in spatial visual attention

    PubMed Central

    Wen, Xiaotong; Liu, Yijun; Yao, Li; Ding, Mingzhou

    2013-01-01

    Dorsal anterior cingulate and bilateral anterior insula form a task control network (TCN) whose primary function includes initiating and maintaining task-level cognitive set and exerting top-down regulation of sensorimotor processing. The default mode network (DMN), comprising an anatomically distinct set of cortical areas, mediates introspection and self-referential processes. Resting-state data show that TCN and DMN interact. The functional ramifications of their interaction remain elusive. Recording fMRI data from human subjects performing a visual spatial attention task and correlating Granger causal influences with behavioral performance and blood-oxygen-level-dependent (BOLD) activity, we report three main findings. First, causal influences from TCN to DMN, i.e., TCN→DMN, are positively correlated with behavioral performance. Second, causal influences from DMN to TCN, i.e., DMN→TCN, are negatively correlated with behavioral performance. Third, stronger DMN→TCN are associated with less elevated BOLD activity in TCN, whereas the relationship between TCN→DMN and DMN BOLD activity is unsystematic. These results suggest that during visual spatial attention, top-down signals from TCN to DMN regulate the activity in DMN to enhance behavioral performance, whereas signals from DMN to TCN, acting possibly as internal noise, interfere with task control, leading to degraded behavioral performance. PMID:23575842

  20. Differential effect of visual motion adaptation upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas, from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.

  1. Sleep inertia, sleep homeostatic, and circadian influences on higher-order cognitive functions

    PubMed Central

    Ronda, Joseph M.; Czeisler, Charles A.; Wright, Kenneth P.

    2016-01-01

    Sleep inertia, sleep homeostatic, and circadian processes modulate cognition, including reaction time, memory, mood, and alertness. How these processes influence higher-order cognitive functions is not well known. Six participants completed a 73-day study that included two 14-day, 28-h forced desynchrony protocols to examine separate and interacting influences of sleep inertia, sleep homeostasis, and circadian phase on the higher-order cognitive functions of inhibitory control and selective visual attention. Cognitive performance for most measures was impaired immediately after scheduled awakening and improved over the first ~2-4 h of wakefulness (sleep inertia); worsened thereafter until scheduled bedtime (sleep homeostasis); and was worst at ~60° and best at ~240° (circadian modulation, with worst and best phases corresponding to ~9 AM and ~9 PM respectively, in individuals with a habitual wake time of 7 AM). The relative influences of sleep inertia, sleep homeostasis, and circadian phase depended on the specific higher-order cognitive function task examined. Inhibitory control appeared to be modulated most strongly by circadian phase, whereas selective visual attention for a spatial-configuration search task was modulated most strongly by sleep inertia. These findings demonstrate that some higher-order cognitive processes are differentially sensitive to different sleep-wake regulatory processes. Differential modulation of cognitive functions by different sleep-wake regulatory processes has important implications for understanding mechanisms contributing to performance impairments during adverse circadian phases, sleep deprivation, and/or upon awakening from sleep. PMID:25773686
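    The circadian phases quoted above can be converted to clock time with simple arithmetic (360° = 24 h). The abstract pairs ~60° with ~9 AM and ~240° with ~9 PM for a 7 AM habitual wake time, which implies that 0° falls near 5 AM; that anchor is inferred here for illustration, not a stated convention:

```python
# Convert a circadian phase in degrees to a clock hour (360 deg = 24 h).
# The 5 AM anchor for 0 deg is inferred from the abstract's example
# (60 deg ~ 9 AM, 240 deg ~ 9 PM, habitual wake time 7 AM); it is an
# assumption, not a stated convention of the study.
def phase_to_clock_hour(phase_deg: float, zero_deg_hour: float = 5.0) -> float:
    return (zero_deg_hour + phase_deg / 360.0 * 24.0) % 24.0

print(phase_to_clock_hour(60))   # worst performance, ~9 AM
print(phase_to_clock_hour(240))  # best performance, ~9 PM
```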

  2. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  3. Early Visual Deprivation Alters Multisensory Processing in Peripersonal Space

    ERIC Educational Resources Information Center

    Collignon, Olivier; Charbonneau, Genevieve; Lassonde, Maryse; Lepore, Franco

    2009-01-01

    Multisensory peripersonal space develops in a maturational process that is thought to be influenced by early sensory experience. We investigated the role of vision in the effective development of audiotactile interactions in peripersonal space. Early blind (EB), late blind (LB) and sighted control (SC) participants were asked to lateralize…

  4. Investigating the role of visual and auditory search in reading and developmental dyslexia

    PubMed Central

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014

  6. Study on tip leakage vortex cavitating flows using a visualization method

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Jiang, Yutong; Cao, Xiaolong; Wang, Guoyu

    2018-01-01

    Experimental investigations of unsteady cavitating flows in the tip leakage region of a hydrofoil with different gap sizes were conducted to highlight the development of gap cavitation. The experiments were carried out in a closed cavitation tunnel, in which a high-speed camera was used to capture the cavitation patterns. A new visualization method based on image processing was developed to capture time-dependent cavitation patterns. The results show that the visualization method can effectively capture the cavitation patterns in the tip region, including both the attached cavity in the gap and the tip leakage vortex (TLV) cavity near the trailing edge. Moreover, as the cavitation number decreases, the TLV cavity develops from a rapid onset-growth-collapse process into a continuous process and extends both upstream and downstream. The attached cavity in the gap gradually stretches beyond the gap and combines with the vortex cavity to form a triangular cavitating region. Furthermore, the influences of gap size on cavitation are also discussed. The gap size strongly affects the loss across the gap and hence the inception locations of the attached cavity. The inception locations and the extension direction of the TLV cavity also differ across gap sizes: the TLV in the case with τ = 0.061 is more likely to be jet-like than that in the case with τ = 0.024, and the gap size strongly affects the TLV strength.

  7. The order of information processing alters economic gain-loss framing effects.

    PubMed

    Kwak, Youngbin; Huettel, Scott

    2018-01-01

    Adaptive decision making requires analysis of available information during the process of choice. In many decisions that information is presented visually, which means that variations in visual properties (e.g., salience, complexity) can potentially influence the process of choice. In the current study, we demonstrate that variation in the left-right positioning of risky and safe decision options can influence the canonical gain-loss framing effect. Two experiments were conducted using an economic framing task in which participants chose between gambles and certain outcomes. The first experiment demonstrated that the magnitude of the gain-loss framing effect was greater when the certain option signaling the current frame was presented on the left side of the visual display. Eye-tracking data during task performance showed a left-gaze bias for initial fixations, suggesting that the option presented on the left side was processed first. Combining eye-tracking and choice data revealed a significant effect of direction of first gaze (i.e., left vs. right) as well as an interaction between gaze direction and identity of the first fixated information (i.e., certain vs. gamble) regardless of frame. A second experiment presented the gamble and certain options in a random order, with a temporal delay between their presentations. We found that the magnitude of gain-loss framing was larger when the certain option was presented first, regardless of left-right positioning, only in individuals with lower risk-taking tendencies. The effect of presentation order on framing was not present in high risk-takers. These results suggest that the sequence of visual information processing, as well as the left-right positioning of options, can bias choices by changing the impact of the presented information during risky decision making. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Genetic architecture of the Delis-Kaplan Executive Function System Trail Making Test: evidence for distinct genetic influences on executive function.

    PubMed

    Vasilopoulos, Terrie; Franz, Carol E; Panizzon, Matthew S; Xian, Hong; Grant, Michael D; Lyons, Michael J; Toomey, Rosemary; Jacobson, Kristen C; Kremen, William S

    2012-03-01

    To examine how genes and environments contribute to relationships among Trail Making Test (TMT) conditions and the extent to which these conditions have unique genetic and environmental influences. Participants included 1,237 middle-aged male twins from the Vietnam Era Twin Study of Aging. The Delis-Kaplan Executive Function System TMT included visual searching, number and letter sequencing, and set-shifting components. Phenotypic correlations among TMT conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58-84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among TMT 1-4. Distinct genetic factors, however, accounted for a portion of variance in visual scanning and set shifting. Thus, although traditional phenotypic shared variance analysis techniques suggest only one general factor underlying different neuropsychological functions in nonpatient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes.
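
    As a rough illustration of how twin data yield heritability estimates, Falconer's approximation can be applied to MZ and DZ twin correlations. This is a simpler method than the structural twin models fitted in the study, and the correlations below are hypothetical:

    ```python
    # Falconer's approximation for twin data (a back-of-the-envelope estimate,
    # not the full ACE structural model used in the study):
    #   h2 = 2 * (rMZ - rDZ)   additive genetic variance (heritability)
    #   c2 = 2 * rDZ - rMZ     shared (common) environment
    #   e2 = 1 - rMZ           nonshared environment plus measurement error
    def falconer(r_mz: float, r_dz: float) -> dict:
        h2 = 2.0 * (r_mz - r_dz)
        c2 = 2.0 * r_dz - r_mz
        e2 = 1.0 - r_mz
        return {"h2": h2, "c2": c2, "e2": e2}

    # Hypothetical twin correlations for one TMT condition
    est = falconer(r_mz=0.55, r_dz=0.30)
    print(est)
    ```

    Structural equation models such as the one used here go further by decomposing the genetic variance itself into a common factor shared across conditions and condition-specific residual factors, which is what allows the study to report unique genetic influences on visual search and set shifting.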

  9. Top-down and bottom-up competition in visual stimuli processing.

    PubMed

    Ligeza, Tomasz S; Tymorek, Agnieszka D; Wyczesany, Mirosław

    2017-01-01

    Limited attention capacity means that not all stimuli present in the visual field are processed equally. While processing of salient stimuli is automatically boosted by bottom-up attention, processing of task-relevant stimuli can be boosted volitionally by top-down attention. Usually, both top-down and bottom-up influences are present simultaneously, which creates a competition between these two types of attention. We examined this competition using both behavioral and electrophysiological measures. Participants responded to letters superimposed on background pictures. We assumed that responding to different conditions of the letter task engages top-down attention to different extents, whereas processing background pictures of varying salience engages bottom-up attention to different extents. To check how manipulation of top-down attention influences bottom-up processing, we measured event-related potentials (ERPs) in response to pictures (engaging mostly bottom-up attention) during three conditions of a letter task (different levels of top-down engagement). Conversely, to check how manipulation of bottom-up attention influences top-down processing, we measured ERP responses to letters (engaging mostly top-down attention) while manipulating the salience of background pictures (different levels of bottom-up engagement). Accuracy and reaction times in response to letters were also analyzed. As expected, most of the ERP and behavioral measures revealed a trade-off between the two types of processing: a decrease in bottom-up processing was associated with an increase in top-down processing and, similarly, a decrease in top-down processing was associated with an increase in bottom-up processing. The results demonstrate competition between the two types of attention.

  10. Effective connectivity in the neural network underlying coarse-to-fine categorization of visual scenes. A dynamic causal modeling study.

    PubMed

    Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole

    2015-10-01

    According to current models of visual perception, scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, the dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. As stimuli, we used sequences of six filtered scenes depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). First, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Top-Down Control of Visual Alpha Oscillations: Sources of Control Signals and Their Mechanisms of Action

    PubMed Central

    Wang, Chao; Rajagovindan, Rajasimhan; Han, Sahng-Min; Ding, Mingzhou

    2016-01-01

    Alpha oscillations (8–12 Hz) are thought to inversely correlate with cortical excitability. Goal-oriented modulation of alpha has been studied extensively. In visual spatial attention, alpha over the region of visual cortex corresponding to the attended location decreases, signifying increased excitability to facilitate the processing of impending stimuli. In contrast, in retention of verbal working memory, alpha over visual cortex increases, signifying decreased excitability to gate out stimulus input to protect the information held online from sensory interference. According to the prevailing model, this goal-oriented biasing of sensory cortex is effected by top-down control signals from frontal and parietal cortices. The present study tests and substantiates this hypothesis by (a) identifying the signals that mediate the top-down biasing influence, (b) examining whether the cortical areas issuing these signals are task-specific or task-independent, and (c) establishing the possible mechanism of the biasing action. High-density human EEG data were recorded in two experimental paradigms: a trial-by-trial cued visual spatial attention task and a modified Sternberg working memory task. Applying Granger causality to both sensor-level and source-level data, we report the following findings. In covert visual spatial attention, the regions exerting top-down control over visual activity are lateralized to the right hemisphere, with the dipoles located at the right frontal eye field (FEF) and the right inferior frontal gyrus (IFG) being the main sources of top-down influences. During retention of verbal working memory, the regions exerting top-down control over visual activity are lateralized to the left hemisphere, with the dipoles located at the left middle frontal gyrus (MFG) being the main source of top-down influences. In both experiments, top-down influences are mediated by alpha oscillations, and the biasing effect is likely achieved via an inhibition-disinhibition mechanism. PMID:26834601

  12. Perceptual Grouping in Haptic Search: The Influence of Proximity, Similarity, and Good Continuation

    ERIC Educational Resources Information Center

    Overvliet, Krista E.; Krampe, Ralf Th.; Wagemans, Johan

    2012-01-01

    We conducted a haptic search experiment to investigate the influence of the Gestalt principles of proximity, similarity, and good continuation. We expected faster search when the distractors could be grouped. We chose edges at different orientations as stimuli because they are processed similarly in the haptic and visual modality. We therefore…

  13. Setting the Tone: An ERP Investigation of the Influences of Phonological Similarity on Spoken Word Recognition in Mandarin Chinese

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2012-01-01

    We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…

  14. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.

    PubMed

    Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias

    2017-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity has been described as follows: "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.

  16. Cortical activation during Braille reading is influenced by early visual experience in subjects with severe visual disability: a correlational fMRI study.

    PubMed

    Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F

    2001-11-01

    Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading. Differences in magnitude and extent of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, i.e., approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language. Moreover, they suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation of activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition, suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.

  17. Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

    PubMed

    Alm, Magnus; Behne, Dawn

    2013-10-01

    Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.

  18. Selective transfer of visual working memory training on Chinese character learning.

    PubMed

    Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel

    2014-01-01

    Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system, we expected that visual working memory, rather than phonological working memory, exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no-training control condition. Training-induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. We therefore suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and imagery processes with complex visual stimuli that fosters the coherent synthesis of a percept from a complex visual input in service of enhanced Chinese character learning. © 2013 Published by Elsevier Ltd.

  19. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  20. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  1. Visual attention shifting in autism spectrum disorders.

    PubMed

    Richard, Annette E; Lajiness-O'Neill, Renee

    2015-01-01

    Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and for improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.

  2. The path to memory is guided by strategy: distinct networks are engaged in associative encoding under visual and verbal strategy and influence memory performance in healthy and impaired individuals

    PubMed Central

    Hales, J. B.; Brewer, J. B.

    2018-01-01

    Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually associated pairs and inferior frontal, medial frontal, and medial occipital for verbally associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467

  3. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  4. ICT integration in mathematics initial teacher training and its impact on visualization: the case of GeoGebra

    NASA Astrophysics Data System (ADS)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) on mathematics visualization skills and initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as conjecturing. It also refers to the changes prospective teachers experience regarding the relevance visual dynamic representations acquire in teaching mathematics. This study observes a shift in school routines when incorporating technology into the mathematics classroom. Visualization appears as a basic competence associated with key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.

  6. Face to face with emotion: holistic face processing is modulated by emotional state.

    PubMed

    Curby, Kim M; Johnson, Kareem J; Tyson, Alyssa

    2012-01-01

    Negative emotions are linked with a local, rather than global, visual processing style, which may preferentially facilitate feature-based, relative to holistic, processing mechanisms. Because faces are typically processed holistically, and because social contexts are prime elicitors of emotions, we examined whether negative emotions decrease holistic processing of faces. We induced positive, negative, or neutral emotions via film clips and measured holistic processing before and after the induction: participants made judgements about cued parts of chimeric faces, and holistic processing was indexed by the interference caused by task-irrelevant face parts. Emotional state significantly modulated face-processing style, with the negative emotion induction leading to decreased holistic processing. Furthermore, self-reported change in emotional state correlated with changes in holistic processing. These results contrast with general assumptions that holistic processing of faces is automatic and immune to outside influences, and they illustrate emotion's power to modulate socially relevant aspects of visual perception.

  7. Evidence of different underlying processes in pattern recall and decision-making.

    PubMed

    Gorman, Adam D; Abernethy, Bruce; Farrow, Damian

    2015-01-01

    The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236] but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.

  8. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  9. Sustained Splits of Attention within versus across Visual Hemifields Produce Distinct Spatial Gain Profiles.

    PubMed

    Walter, Sabrina; Keitel, Christian; Müller, Matthias M

    2016-01-01

    Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
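    The frequency-tagging logic of this record can be illustrated with a short sketch: each stimulus flickers at its own frequency, and the amplitude of the EEG spectrum at that frequency (the SSVEP) indexes the attention allocated to that stimulus. The sampling rate, epoch length, tag frequencies, and signal amplitudes below are hypothetical illustrations, not values from the study.

    ```python
    import numpy as np

    def ssvep_amplitudes(eeg, fs, freqs):
        """Amplitude of an EEG epoch's spectrum at each tagging frequency."""
        n = len(eeg)
        # one-sided amplitude spectrum, normalized so a pure sine of
        # amplitude A at an exact bin reads out as A
        spectrum = np.abs(np.fft.rfft(eeg)) / n * 2
        fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        # pick the bin nearest each flicker frequency
        return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}

    # synthetic 2-s epoch at 500 Hz: two "LED" tags, the attended one stronger
    fs = 500
    t = np.arange(0, 2, 1 / fs)
    eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
    amps = ssvep_amplitudes(eeg, fs, [12, 15])
    # amps[12] ≈ 2.0 (attended) vs amps[15] ≈ 0.5 (ignored)
    ```

    With a 2-s epoch the frequency resolution is 0.5 Hz, so both tags land on exact FFT bins; real analyses additionally average over trials and electrodes.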

  10. Differential effect of glucose ingestion on the neural processing of food stimuli in lean and overweight adults.

    PubMed

    Heni, Martin; Kullmann, Stephanie; Ketterer, Caroline; Guthoff, Martina; Bayer, Margarete; Staiger, Harald; Machicao, Fausto; Häring, Hans-Ulrich; Preissl, Hubert; Veit, Ralf; Fritsche, Andreas

    2014-03-01

    Eating behavior is crucial in the development of obesity and Type 2 diabetes. To further investigate its regulation, we studied the effects of glucose versus water ingestion on the neural processing of visual high and low caloric food cues in 12 lean and 12 overweight subjects by functional magnetic resonance imaging. We found body weight to substantially impact the brain's response to visual food cues after glucose versus water ingestion. Specifically, there was a significant interaction between body weight, condition (water versus glucose), and caloric content of food cues. Although overweight subjects showed a generalized reduced response to food objects in the fusiform gyrus and precuneus, the lean group showed a differential pattern to high versus low caloric foods depending on glucose versus water ingestion. Furthermore, we observed effects associated with plasma insulin and glucose. The hypothalamic response to high caloric food cues negatively correlated with changes in blood glucose 30 min after glucose ingestion, while especially brain regions in the prefrontal cortex showed a significant negative relationship with increases in plasma insulin 120 min after glucose ingestion. We conclude that the postprandial neural processing of food cues is highly influenced by body weight, especially in visual areas, potentially altering visual attention to food. Furthermore, our results underline that insulin markedly influences prefrontal activity to high caloric food cues after a meal, indicating that postprandial hormones may be potential players in modulating executive control. Copyright © 2013 Wiley Periodicals, Inc.

  11. The role of rotational hand movements and general motor ability in children’s mental rotation performance

    PubMed Central

    Jansen, Petra; Kellner, Jan

    2015-01-01

    Mental rotation of visual images of body parts and abstract shapes can be influenced by simultaneous motor activity. Children in particular have a strong coupling between motor and cognitive processes. We investigated the influence of a rotational hand movement performed by rotating a knob on mental rotation performance in primary school-age children (N = 83; age range: 7.0–8.3 and 9.0–10.11 years). In addition, we assessed the role of motor ability in this relationship. Boys in the 7- to 8-year-old group were faster when mentally and manually rotating in the same direction than in the opposite direction. For girls and older children this effect was not found. A positive relationship was found between motor ability and accuracy on the mental rotation task: stronger motor ability related to improved mental rotation performance. In both age groups, children with more advanced motor abilities were more likely to adopt motor processes to solve mental rotation tasks if the mental rotation task was primed by a motor task. Our evidence supports the idea that an overlap between motor and visual cognitive processes in children is influenced by motor ability. PMID:26236262

  12. The role of rotational hand movements and general motor ability in children's mental rotation performance.

    PubMed

    Jansen, Petra; Kellner, Jan

    2015-01-01

    Mental rotation of visual images of body parts and abstract shapes can be influenced by simultaneous motor activity. Children in particular have a strong coupling between motor and cognitive processes. We investigated the influence of a rotational hand movement performed by rotating a knob on mental rotation performance in primary school-age children (N = 83; age range: 7.0-8.3 and 9.0-10.11 years). In addition, we assessed the role of motor ability in this relationship. Boys in the 7- to 8-year-old group were faster when mentally and manually rotating in the same direction than in the opposite direction. For girls and older children this effect was not found. A positive relationship was found between motor ability and accuracy on the mental rotation task: stronger motor ability related to improved mental rotation performance. In both age groups, children with more advanced motor abilities were more likely to adopt motor processes to solve mental rotation tasks if the mental rotation task was primed by a motor task. Our evidence supports the idea that an overlap between motor and visual cognitive processes in children is influenced by motor ability.

  13. Does a Sensory Processing Deficit Explain Counting Accuracy on Rapid Visual Sequencing Tasks in Adults with and without Dyslexia?

    ERIC Educational Resources Information Center

    Conlon, Elizabeth G.; Wright, Craig M.; Norris, Karla; Chekaluk, Eugene

    2011-01-01

    The experiments conducted aimed to investigate whether reduced accuracy when counting stimuli presented in rapid temporal sequence in adults with dyslexia could be explained by a sensory processing deficit, a general slowing in processing speed or difficulties shifting attention between stimuli. To achieve these aims, the influence of the…

  14. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment

    PubMed Central

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also made fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supra-modal deficit of phonemic categorization in children with SLI. Clinical implications are discussed. PMID:24904454

  15. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    PubMed

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also made fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supra-modal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.
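    The "percent of information transmitted" analysis in this record is conventionally computed from a stimulus-by-response confusion matrix, in the style of Miller and Nicely's transmitted-information measure: T(x;y) = H(x) + H(y) − H(x,y), expressed relative to the stimulus entropy H(x). A minimal sketch with hypothetical confusion matrices, not the study's data:

    ```python
    import numpy as np

    def relative_info_transmitted(confusion):
        """Relative information transmitted T(x;y)/H(x) from a
        stimulus-by-response confusion matrix (rows = stimuli)."""
        p = confusion / confusion.sum()
        px = p.sum(axis=1)  # stimulus marginal
        py = p.sum(axis=0)  # response marginal

        def entropy(q):
            q = q[q > 0]  # 0 * log(0) is taken as 0
            return -(q * np.log2(q)).sum()

        t = entropy(px) + entropy(py) - entropy(p.ravel())
        return t / entropy(px)

    # perfect transmission of two equiprobable categories -> 1.0
    perfect = np.array([[10, 0], [0, 10]])
    # chance-level responding -> 0.0
    chance = np.array([[5, 5], [5, 5]])
    ```

    Applied per phonetic feature (e.g. place of articulation), this yields the feature-wise transmission scores the abstract refers to.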

  16. Top-down visual search in Wimmelbild

    NASA Astrophysics Data System (ADS)

    Bergbauer, Julia; Tari, Sibel

    2013-03-01

    Wimmelbild, which means "teeming figure picture," is a popular genre of visual puzzle. Abundant masses of small figures are brought together in complex arrangements to make one scene in a Wimmelbild. It is a picture-hunt game. We discuss what type of computations/processes could possibly underlie the discovery of figures that are hidden due to a distractive influence of the context. One thing is certain: the processes are unlikely to be purely bottom-up. One possibility is to re-arrange parts and see what happens. As this idea is linked to creativity, there are abundant examples of unconventional part re-organization in modern art. A second possibility is to define what to look for - that is, to formulate the search as a top-down process. We address top-down visual search in Wimmelbild with the help of diffuse distance and curvature coding fields.

  17. The Tonal Function of a Task-Irrelevant Chord Modulates Speed of Visual Processing

    ERIC Educational Resources Information Center

    Escoffier, N.; Tillmann, B.

    2008-01-01

    Harmonic priming studies have provided evidence that musical expectations influence sung phoneme monitoring, with facilitated processing for phonemes sung on tonally related (expected) chords in comparison to less-related (less-expected) chords [Bigand, Tillmann, Poulin, D'Adamo, and Madurell (2001). "The effect of harmonic context on phoneme…

  18. Gamification in Science Education: Gamifying Learning of Microscopic Processes in the Laboratory

    ERIC Educational Resources Information Center

    Fleischmann, Katja; Ariel, Ellen

    2016-01-01

    Understanding and trouble-shooting microscopic processes involved in laboratory tests are often challenging for students in science education because of the inability to visualize the different steps and the various errors that may influence test outcome. The effectiveness of gamification or the use of game design elements and game-mechanics were…

  19. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    ERIC Educational Resources Information Center

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  20. "Pushing the Button While Pushing the Argument": Motor Priming of Abstract Action Language

    ERIC Educational Resources Information Center

    Schaller, Franziska; Weiss, Sabine; Müller, Horst M.

    2017-01-01

    In a behavioral study we analyzed the influence of visual action primes on abstract action sentence processing. We thereby aimed at investigating mental motor involvement during processes of meaning constitution of action verbs in abstract contexts. In the first experiment, participants executed either congruous or incongruous movements parallel…

  1. A magnetoencephalography study of multi-modal processing of pain anticipation in primary sensory cortices.

    PubMed

    Gopalakrishnan, R; Burgess, R C; Plow, E B; Floden, D P; Machado, A G

    2015-09-24

    Pain anticipation plays a critical role in pain chronification and results in disability due to pain avoidance. It is important to understand how different sensory modalities (auditory, visual or tactile) may influence pain anticipation, as different strategies could be applied to mitigate anticipatory phenomena and chronification. In this study, using a countdown paradigm, we evaluated with magnetoencephalography the neural networks associated with pain anticipation elicited by different sensory modalities in normal volunteers. When confronted with well-established cues that signaled pain, visual and somatosensory cortices engaged the pain neuromatrix areas early during the countdown process, whereas the auditory cortex displayed delayed processing. In addition, during pain anticipation, the visual cortex displayed independent processing capabilities after learning the contextual meaning of cues from associative and limbic areas. Interestingly, cross-modal activation was also evident and strong when visual and tactile cues signaled upcoming pain. Dorsolateral prefrontal cortex and mid-cingulate cortex showed significant activity during pain anticipation regardless of modality. Our results show pain anticipation is processed with great time efficiency by a highly specialized and hierarchical network. The highest degree of higher-order processing is modulated by context (pain) rather than content (modality) and rests within the associative limbic regions, corroborating their intrinsic role in chronification. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Developmental trajectory of neural specialization for letter and number visual processing.

    PubMed

    Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M

    2018-05-01

    Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.

  3. Common Visual Preference for Curved Contours in Humans and Great Apes.

    PubMed

    Munar, Enric; Gómez-Puerto, Gerardo; Call, Josep; Nadal, Marcos

    2015-01-01

    Among the visual preferences that guide many everyday activities and decisions, from consumer choices to social judgment, preference for curved over sharp-angled contours is commonly thought to have played an adaptive role throughout human evolution, favoring the avoidance of potentially harmful objects. However, because nonhuman primates also exhibit preferences for certain visual qualities, it is conceivable that humans' preference for curved contours is grounded on perceptual and cognitive mechanisms shared with extant nonhuman primate species. Here we aimed to determine whether nonhuman great apes and humans share a visual preference for curved over sharp-angled contours using a 2-alternative forced choice experimental paradigm under comparable conditions. Our results revealed that the human group and the great ape group indeed share a common preference for curved over sharp-angled contours, but that they differ in the manner and magnitude with which this preference is expressed behaviorally. These results suggest that humans' visual preference for curved objects evolved from earlier primate species' visual preferences, and that during this process it became stronger, but also more susceptible to the influence of higher cognitive processes and preference for other visual features.

  4. No psychological effect of color context in a low level vision task

    PubMed Central

    Pedley, Adam; Wade, Alex R

    2013-01-01

    Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280

  5. No psychological effect of color context in a low level vision task.

    PubMed

    Pedley, Adam; Wade, Alex R

    2013-01-01

    A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas.

  6. The contents of visual working memory reduce uncertainty during visual search.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2011-05-01

    Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.

  7. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    PubMed

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  8. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    PubMed Central

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  9. Mechanisms of attention in reading parafoveal words: a cross-linguistic study in children.

    PubMed

    Siéroff, Eric; Dahmen, Riadh; Fagard, Jacqueline

    2012-05-01

    The right visual field superiority (RVFS) for words may be explained by the cerebral lateralization for language, the scanning habits in relation to script direction, and spatial attention. The present study explored the influence of spatial attention on the RVFS in relation to scanning habits in school-age children. French second- and fourth-graders identified briefly presented French parafoveal words. Tunisian second- and fourth-graders identified Arabic words, and Tunisian fourth-graders identified French words. The distribution of spatial attention was evaluated by using a distracter in the visual field opposite the word. The results of the correct identification score showed that reading direction had only a partial effect on the identification of parafoveal words and the distribution of attention, with a clear RVFS and a larger effect of the distracter in the left visual field in French children reading French words, and an absence of asymmetry when Tunisian children read Arabic words. Fourth-grade Tunisian children also showed an RVFS when reading French words without an asymmetric distribution of attention, suggesting that their native language may have partially influenced reading strategies in the newly learned language. However, the mode of letter processing, evaluated by a qualitative error score, was only influenced by reading direction, with more sequential processing in the visual field where reading "begins." The distribution of attention when reading parafoveal words is better explained by the interaction between left hemisphere activation and strategies related to reading direction. We discuss these results in light of an attentional theory that dissociates selection and preparation.

  10. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.

  11. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  12. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  13. The research on visual industrial robot which adopts fuzzy PID control algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye

    2017-03-01

    The control system of a six-degrees-of-freedom visual industrial robot, based on a PC with multi-axis motion control cards, was studied. To handle the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved better control performance. In the vision system, a CCD camera acquires image signals and sends them to a video processing card; after processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator operated together with a machine tool and the vision system to grasp, process, and verify workpieces. The results are relevant to the manufacture of industrial robots.
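    The adaptive fuzzy PID idea, an ordinary PID loop whose gains are retuned online by fuzzy rules on the tracking error, can be sketched as follows. This is a minimal illustration, not the authors' controller: the membership function, the two-rule Sugeno-style rule base, the gain ranges, and the first-order joint model are all assumptions, since the abstract gives no implementation details.

```python
# Sketch of an adaptive fuzzy PID servo loop (illustrative assumptions only).

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def mu_large(x, scale=1.0):
    """Membership of |x| in the fuzzy set 'large': a clipped ramp on [0, scale]."""
    return clamp(abs(x) / scale, 0.0, 1.0)

class FuzzyPID:
    """PID controller whose gains are rescheduled each step by two
    Sugeno-style fuzzy rules on the error magnitude:
      IF |e| is large THEN raise Kp and Kd, lower Ki
      IF |e| is small THEN lower Kp and Kd, raise Ki
    """

    def __init__(self, kp, ki, kd, dt, step=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.step = dt, step
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        big = mu_large(error)      # firing strength of the "large" rule
        small = 1.0 - big          # firing strength of the "small" rule
        # Weighted rule consequents nudge the gains, clamped to safe ranges.
        self.kp = clamp(self.kp + self.step * (big - small), 1.0, 4.0)
        self.ki = clamp(self.ki + self.step * (small - big), 0.1, 2.0)
        self.kd = clamp(self.kd + 0.1 * self.step * (big - small), 0.01, 0.1)
        de = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.integral += error * self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * de

# Track a unit setpoint with a toy first-order joint model: tau * y' = u - y.
ctrl = FuzzyPID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
y, tau = 0.0, 0.2
for _ in range(2000):
    u = ctrl.update(1.0 - y)
    y += ctrl.dt * (u - y) / tau
print(round(y, 3))
```

    A real servo loop would run at the motion control card's sample rate and typically use a richer rule table over both the error and its rate of change; the two-rule version above only shows the mechanism.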

  14. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    PubMed

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with the help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of the preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than during the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer-lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation by influencing the retention of motor skills.

  15. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed Central

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274

  16. Integrating conflict detection and attentional control mechanisms.

    PubMed

    Walsh, Bong J; Buonocore, Michael H; Carter, Cameron S; Mangun, George R

    2011-09-01

    Human behavior involves monitoring and adjusting performance to meet established goals. Performance-monitoring systems that act by detecting conflict in stimulus and response processing have been hypothesized to influence cortical control systems to adjust and improve performance. Here we used fMRI to investigate the neural mechanisms of conflict monitoring and resolution during voluntary spatial attention. We tested the hypothesis that the ACC would be sensitive to conflict during attentional orienting and influence activity in the frontoparietal attentional control network that selectively modulates visual information processing. We found that activity in ACC increased monotonically with increasing attentional conflict. This increased conflict detection activity was correlated with both increased activity in the attentional control network and improved speed and accuracy from one trial to the next. These results establish a long hypothesized interaction between conflict detection systems and neural systems supporting voluntary control of visual attention.

  17. Temporal production and visuospatial processing.

    PubMed

    Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo

    2005-12-01

    Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of a visuospatial pattern to hold in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.

  18. Causal evidence for frontal involvement in memory target maintenance by posterior brain areas during distracter interference of visual working memory

    PubMed Central

    Feredoes, Eva; Heinen, Klaartje; Weiskopf, Nikolaus; Ruff, Christian; Driver, Jon

    2011-01-01

    Dorsolateral prefrontal cortex (DLPFC) is recruited during visual working memory (WM) when relevant information must be maintained in the presence of distracting information. The mechanism by which DLPFC might ensure successful maintenance of the contents of WM is, however, unclear; it might enhance neural maintenance of memory targets or suppress processing of distracters. To adjudicate between these possibilities, we applied time-locked transcranial magnetic stimulation (TMS) during functional MRI, an approach that permits causal assessment of a stimulated brain region's influence on connected brain regions, and evaluated how this influence may change under different task conditions. Participants performed a visual WM task requiring retention of visual stimuli (faces or houses) across a delay during which visual distracters could be present or absent. When distracters were present, they were always from the opposite stimulus category, so that targets and distracters were represented in distinct posterior cortical areas. We then measured whether DLPFC-TMS, administered in the delay at the time point when distracters could appear, would modulate posterior regions representing memory targets or distracters. We found that DLPFC-TMS influenced posterior areas only when distracters were present and, critically, that this influence consisted of increased activity in regions representing the current memory targets. DLPFC-TMS did not affect regions representing current distracters. These results provide a new line of causal evidence for a top-down DLPFC-based control mechanism that promotes successful maintenance of relevant information in WM in the presence of distraction. PMID:21987824

  19. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568

  20. Dye-enhanced visualization of rat whiskers for behavioral studies.

    PubMed

    Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E

    2017-06-14

    Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.

  1. Visual field defects may not affect safe driving.

    PubMed

    Dow, Jamie

    2011-10-01

    In Quebec a driver whose acquired visual field defect renders them ineligible for a driver's permit renewal may request an exemption from the visual field standard by demonstrating safe driving despite the defect. For safety reasons it was decided to attempt to identify predictors of failure on the road test in order to avoid placing driving evaluators in potentially dangerous situations when evaluating drivers with visual field defects. During a 4-month period in 2009 all requests for exemptions from the visual field standard were collected and analyzed. All available medical and visual field data were collated for 103 individuals, of whom 91 successfully completed the evaluation process and obtained a waiver. The collated data included age, sex, type of visual field defect, visual field characteristics, and concomitant medical problems. No single factor, or combination of factors, could predict failure of the road test. All 5 failures of the road test had cognitive problems but 6 of the successful drivers also had known cognitive problems. Thus, cognitive problems influence the risk of failure but do not predict certain failure. Most of the applicants for an exemption were able to complete the evaluation process successfully, thereby demonstrating safe driving despite their handicap. Consequently, jurisdictions that have visual field standards for their driving permit should implement procedures to evaluate drivers with visual field defects that render them unable to meet the standard but who wish to continue driving.

  2. Sleep inertia, sleep homeostatic and circadian influences on higher-order cognitive functions.

    PubMed

    Burke, Tina M; Scheer, Frank A J L; Ronda, Joseph M; Czeisler, Charles A; Wright, Kenneth P

    2015-08-01

    Sleep inertia, sleep homeostatic and circadian processes modulate cognition, including reaction time, memory, mood and alertness. How these processes influence higher-order cognitive functions is not well known. Six participants completed a 73-day-long study that included two 14-day-long 28-h forced desynchrony protocols to examine separate and interacting influences of sleep inertia, sleep homeostasis and circadian phase on higher-order cognitive functions of inhibitory control and selective visual attention. Cognitive performance for most measures was impaired immediately after scheduled awakening and improved during the first ~2-4 h of wakefulness (decreasing sleep inertia); worsened thereafter until scheduled bedtime (increasing sleep homeostasis); and was worst at ~60° and best at ~240° (circadian modulation, with worst and best phases corresponding to ~09:00 and ~21:00 hours, respectively, in individuals with a habitual wake time of 07:00 hours). The relative influences of sleep inertia, sleep homeostasis and circadian phase depended on the specific higher-order cognitive function task examined. Inhibitory control appeared to be modulated most strongly by circadian phase, whereas selective visual attention for a spatial-configuration search task was modulated most strongly by sleep inertia. These findings demonstrate that some higher-order cognitive processes are differentially sensitive to different sleep-wake regulatory processes. Differential modulation of cognitive functions by different sleep-wake regulatory processes has important implications for understanding mechanisms contributing to performance impairments during adverse circadian phases, sleep deprivation and/or upon awakening from sleep. © 2015 European Sleep Research Society.

  3. Unsold is unseen … or is it? Examining the role of peripheral vision in the consumer choice process using eye-tracking methodology.

    PubMed

    Wästlund, Erik; Shams, Poja; Otterbring, Tobias

    2018-01-01

    In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid under both controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Is visual image segmentation a bottom-up or an interactive process?

    PubMed

    Vecera, S P; Farah, M J

    1997-11-01

    Visual image segmentation is the process by which the visual system groups features that are part of a single shape. Is image segmentation a bottom-up or an interactive process? In Experiments 1 and 2, we presented subjects with two overlapping shapes and asked them to determine whether two probed locations were on the same shape or on different shapes. The availability of top-down support was manipulated by presenting either upright or rotated letters. Subjects were fastest to respond when the shapes corresponded to familiar shapes--the upright letters. In Experiment 3, we used a variant of this segmentation task to rule out the possibility that subjects performed same/different judgments after segmentation and recognition of both letters. Finally, in Experiment 4, we ruled out the possibility that the advantage for upright letters was merely due to faster recognition of upright letters relative to rotated letters. The results suggested that the previous effects were not due to faster recognition of upright letters; stimulus familiarity influenced segmentation per se. The results are discussed in terms of an interactive model of visual image segmentation.

  5. Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas

    PubMed Central

    Michalareas, Georgios; Vezoli, Julien; van Pelt, Stan; Schoffelen, Jan-Mathijs; Kennedy, Henry; Fries, Pascal

    2016-01-01

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral and dorsal stream visual areas are differentially affected by inter-areal influences in the alpha-beta band. PMID:26777277

  6. Learning to Link Visual Contours

    PubMed Central

    Li, Wu; Piëch, Valentin; Gilbert, Charles D.

    2008-01-01

    SUMMARY In complex visual scenes, linking related contour elements is important for object recognition. This process, thought to be stimulus driven and hard wired, has substrates in primary visual cortex (V1). Here, however, we find contour integration in V1 to depend strongly on perceptual learning and top-down influences that are specific to contour detection. In naive monkeys the information about contours embedded in complex backgrounds is absent in V1 neuronal responses, and is independent of the locus of spatial attention. Training animals to find embedded contours induces strong contour-related responses specific to the trained retinotopic region. These responses are most robust when animals perform the contour detection task, but disappear under anesthesia. Our findings suggest that top-down influences dynamically adapt neural circuits according to specific perceptual tasks. This may serve as a general neuronal mechanism of perceptual learning, and reflect top-down mediated changes in cortical states. PMID:18255036

  7. Genetic Architecture of the Delis-Kaplan Executive Function System Trail Making Test: Evidence for Distinct Genetic Influences on Executive Function

    PubMed Central

    Vasilopoulos, Terrie; Franz, Carol E.; Panizzon, Matthew S.; Xian, Hong; Grant, Michael D.; Lyons, Michael J.; Toomey, Rosemary; Jacobson, Kristen C.; Kremen, William S.

    2012-01-01

    Objective To examine how genes and environments contribute to relationships among Trail Making test conditions and the extent to which these conditions have unique genetic and environmental influences. Method Participants included 1237 middle-aged male twins from the Vietnam-Era Twin Study of Aging (VETSA). The Delis-Kaplan Executive Function System Trail Making test included visual searching, number and letter sequencing, and set-shifting components. Results Phenotypic correlations among Trails conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58–84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor, while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set-shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. Conclusions A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among Trails 1–4. Distinct genetic factors, however, accounted for a portion of variance in visual scanning and set-shifting. Thus, although traditional phenotypic shared variance analysis techniques suggest only one general factor underlying different neuropsychological functions in non-patient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes. PMID:22201299
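    The partitioning reported above can be made concrete with a small sketch: in a common-pathway twin model, each condition's heritability splits into a share routed through the common genetic factor and a condition-specific share. The function and the path coefficients below are illustrative assumptions, not the study's fitted parameters:

```python
def unique_genetic_share(a_common, a_specific):
    """Share of a trait's heritability not explained by the common factor.

    a_common and a_specific are standardized genetic path coefficients
    from the common and trait-specific genetic factors (illustrative
    common-pathway model).
    """
    h2_total = a_common**2 + a_specific**2   # total genetic variance
    return a_specific**2 / h2_total          # share not via the common factor

# Hypothetical paths chosen so the unique share lands near the 57%
# reported for visual search:
print(round(unique_genetic_share(0.45, 0.52), 2))  # 0.57
```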

  8. Visual Processing Recruits the Auditory Cortices in Prelingually Deaf Children and Influences Cochlear Implant Outcomes.

    PubMed

    Liang, Maojin; Chen, Yuebo; Zhao, Fei; Zhang, Junpeng; Liu, Jiahao; Zhang, Xueyuan; Cai, Yuexin; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-09-01

    Although visual processing recruitment of the auditory cortices has been reported previously in prelingually deaf children, who have a rapidly developing brain and no auditory processing, the visual processing recruitment of auditory cortices might differ for different visual stimuli and may affect cochlear implant (CI) outcomes. Ten prelingually deaf children, 4 to 6 years old, were recruited for the study. Twenty prelingually deaf subjects, 4 to 6 years old with CIs for 1 year, were also recruited: 10 with well-performing CIs and 10 with poorly performing CIs. Ten age- and sex-matched normal-hearing children were recruited as controls. Visual ("sound" photo [photograph with imaginative sound] and "nonsound" photo [photograph without imaginative sound]) evoked potentials were measured in all subjects. P1 at Oz and N1 at the bilateral temporal-frontal areas (FC3 and FC4) were compared. N1 amplitudes were strongest in the deaf children, followed by those with poorly performing CIs, controls, and those with well-performing CIs. There was no significant difference between controls and those with well-performing CIs. "Sound" photo stimuli evoked a stronger N1 than "nonsound" photo stimuli. Further analysis showed that only at FC4 in deaf subjects and those with poorly performing CIs were the N1 responses to "sound" photo stimuli stronger than those to "nonsound" photo stimuli. No significant difference was found between the FC3 and FC4 areas. No significant difference was found in N1 latencies or in P1 amplitudes or latencies. The results indicate enhanced visual recruitment of the auditory cortices in prelingually deaf children. Additionally, the decrement in visual recruitment of auditory cortices was related to good CI outcomes.

  9. Video game experience and its influence on visual attention parameters: an investigation using the framework of the Theory of Visual Attention (TVA).

    PubMed

    Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo

    2015-05-01

    Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question using the framework of the Theory of Visual Attention (TVA) with tools that allowed us to assess various parameters that are related to different aspects of visual attention (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention) and that are measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed, the latter being restricted to the lower positions of the computer display. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, or spatial distribution of attention after 15 days of training. However, the observation of a selective improvement of processing speed at the lower positions of the computer screen after video game training, together with the retest effects, suggests that there is only limited scope for improving basic aspects of visual attention (TVA) with practice. Copyright © 2015 Elsevier B.V. All rights reserved.
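    For readers unfamiliar with TVA, the parameters named above (perception threshold, processing speed) enter an exponential race model for encoding into visual short-term memory. A minimal single-object sketch, with all parameter values illustrative rather than fits from the study:

```python
import math

def p_encoded(t, C=30.0, t0=0.02, w_rel=1.0):
    """TVA-style probability that one object is encoded by exposure time t.

    C: processing capacity (items/s), t0: perception threshold (s),
    w_rel: the object's relative attentional weight. Exponential race
    model; all parameter values here are illustrative.
    """
    if t <= t0:
        return 0.0                       # nothing is encoded below threshold
    v = C * w_rel                        # processing rate for this object
    return 1.0 - math.exp(-v * (t - t0))

# Encoding probability after a 100 ms exposure with these example values:
print(round(p_encoded(0.1), 3))
```

    A higher processing speed C or a lower threshold t0, the two parameters on which the experts showed advantages, both raise this probability at a given exposure duration.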

  10. A theta rhythm in macaque visual cortex and its attentional modulation

    PubMed Central

    Spyropoulos, Georgios; Fries, Pascal

    2018-01-01

    Theta rhythms govern rodent sniffing and whisking, and human language processing. Human psychophysics suggests a role for theta also in visual attention. However, little is known about theta in visual areas and its attentional modulation. We used electrocorticography (ECoG) to record local field potentials (LFPs) simultaneously from areas V1, V2, V4, and TEO of two macaque monkeys performing a selective visual attention task. We found a ≈4-Hz theta rhythm within both the V1–V2 and the V4–TEO region, and theta synchronization between them, with a predominantly feedforward directed influence. ECoG coverage of large parts of these regions revealed a surprising spatial correspondence between theta and visually induced gamma. Furthermore, gamma power was modulated with theta phase. Selective attention to the respective visual stimulus strongly reduced these theta-rhythmic processes, leading to an unusually strong attention effect for V1. Microsaccades (MSs) were partly locked to theta. However, neuronal theta rhythms tended to be even more pronounced for epochs devoid of MSs. Thus, we find an MS-independent theta rhythm specific to visually driven parts of V1–V2, which rhythmically modulates local gamma and entrains V4–TEO, and which is strongly reduced by attention. We propose that the less theta-rhythmic and thereby more continuous processing of the attended stimulus serves the exploitation of this behaviorally most relevant information. The theta-rhythmic and thereby intermittent processing of the unattended stimulus likely reflects the ecologically important exploration of less relevant sources of information. PMID:29848632

  11. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight.

    PubMed

    Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-11-01

    Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. 
Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.

  13. Immediate use of prosody and context in predicting a syntactic structure.

    PubMed

    Nakamura, Chie; Arai, Manabu; Mazuka, Reiko

    2012-11-01

    Numerous studies have reported an effect of prosodic information on parsing, but whether prosody can impact even the initial parsing decision remains unclear. In a visual world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on processing temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, thus eliminating the explanation that listeners had simply associated marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences the initial syntactic analysis and also the subsequent cost at disambiguating information. The results also provide the first evidence for pre-head structural prediction driven by prosodic and contextual information with a head-final construction. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Opposing effects of attention and consciousness on afterimages

    PubMed Central

    van Boxtel, Jeroen J. A.; Tsuchiya, Naotsugu; Koch, Christof

    2010-01-01

    The brain's ability to handle sensory information is influenced by both selective attention and consciousness. There is no consensus on the exact relationship between these two processes and whether they are distinct. So far, no experiment has simultaneously manipulated both. We carried out a full factorial 2 × 2 study of the simultaneous influences of attention and consciousness (as assayed by visibility) on perception, correcting for possible concurrent changes in attention and consciousness. We investigated the duration of afterimages for all four combinations of high versus low attention and visible versus invisible. We show that selective attention and visual consciousness have opposite effects: paying attention to the grating decreases the duration of its afterimage, whereas consciously seeing the grating increases the afterimage duration. These findings provide clear evidence for distinctive influences of selective attention and consciousness on visual perception. PMID:20424112

  15. Visual Contrast Sensitivity Improvement by Right Frontal High-Beta Activity Is Mediated by Contrast Gain Mechanisms and Influenced by Fronto-Parietal White Matter Microstructure

    PubMed Central

    Quentin, Romain; Elkin Frankston, Seth; Vernet, Marine; Toba, Monica N.; Bartolomeo, Paolo; Chanes, Lorena; Valero-Cabré, Antoni

    2016-01-01

    Behavioral and electrophysiological studies in humans and non-human primates have correlated frontal high-beta activity with the orienting of endogenous attention and shown the ability of the latter function to modulate visual performance. Here we combined rhythmic transcranial magnetic stimulation (TMS) and diffusion imaging to study the relation between frontal oscillatory activity and visual performance, and we associated these phenomena with a specific set of white matter pathways that subtend attentional processes in humans. High-beta rhythmic activity on the right frontal eye field (FEF) was induced with TMS, and its causal effects on a contrast sensitivity function were recorded to explore its ability to improve visual detection performance across different stimulus contrast levels. Our results show that frequency-specific activity patterns engaged in the right FEF have the ability to induce a leftward shift of the psychometric function. This increase in visual performance across different levels of stimulus contrast is likely mediated by a contrast gain mechanism. Interestingly, microstructural measures of white matter connectivity suggest a strong implication of right fronto-parietal connectivity linking the FEF and the intraparietal sulcus in propagating high-beta rhythmic signals across brain networks and subtending top-down frontal influences on visual performance. PMID:25899709
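    A contrast gain mechanism of the kind invoked above is conventionally modeled with a Naka-Rushton contrast-response function, in which the modulation lowers the semisaturation contrast c50 and thereby shifts the function leftward. A minimal sketch; the function and every parameter value below are illustrative assumptions, not the study's data:

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast-response function (all values illustrative)."""
    return r_max * c**n / (c**n + c50**n)

# Contrast gain: lowering the semisaturation contrast c50 shifts the
# response (and hence the psychometric) function leftward, so the same
# response level is reached at a lower stimulus contrast.
baseline = naka_rushton(0.1, c50=0.20)
boosted = naka_rushton(0.1, c50=0.12)   # hypothetical stimulation effect
print(boosted > baseline)               # True: more sensitive at low contrast
```

    The alternative, a response gain change, would instead scale r_max and grow with contrast rather than shifting the curve along the contrast axis.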

  16. Impact of Noise and Working Memory on Speech Processing in Adults with and without ADHD

    ERIC Educational Resources Information Center

    Michalek, Anne M. P.

    2012-01-01

    Auditory processing of speech is influenced by internal (i.e., attention, working memory) and external factors (i.e., background noise, visual information). This study examined the interplay among these factors in individuals with and without ADHD. All participants completed a listening in noise task, two working memory capacity tasks, and two…

  17. Complexity and Hemispheric Abilities: Evidence for a Differential Impact on Semantics and Phonology

    ERIC Educational Resources Information Center

    Tremblay, Tania; Monetta, Laura; Joanette, Yves

    2009-01-01

    The main goal of this study was to determine whether the phonological and semantic processing of words are similarly influenced by an increase in processing complexity. Thirty-six French-speaking young adults performed both semantic and phonological word judgment tasks, using a divided visual field procedure. The phonological complexity of words…

  18. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing as well. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  19. Goal-Directed Visual Processing Differentially Impacts Human Ventral and Dorsal Visual Representations

    PubMed Central

    2017-01-01

    Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. 
SIGNIFICANCE STATEMENT Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway that processes “where” it is located. This view has been challenged by recent studies revealing the existence of “what” and “where” information in both pathways. Here, we found that goal-directed visual information processing differentially modulates shape-based object category representations in the two pathways. Whereas ventral representations are more invariant to the demand of the task, reflecting what an object is, dorsal representations are more adaptive, reflecting what we do with the object. Thus, despite the existence of “what” and “where” information in both pathways, visual representations may still differ fundamentally in the two pathways. PMID:28821655

  20. Relative Spatial Frequency Processing Drives Hemispheric Asymmetry in Conscious Awareness

    PubMed Central

    Piazza, Elise A.; Silver, Michael A.

    2017-01-01

    Visual stimuli with different spatial frequencies (SFs) are processed asymmetrically in the two cerebral hemispheres. Specifically, low SFs are processed relatively more efficiently in the right hemisphere than the left hemisphere, whereas high SFs show the opposite pattern. In this study, we ask whether these differences between the two hemispheres reflect a low-level division that is based on absolute SF values or a flexible comparison of the SFs in the visual environment at any given time. In a recent study, we showed that conscious awareness of SF information (i.e., visual perceptual selection from multiple SFs simultaneously present in the environment) differs between the two hemispheres. Building upon that result, here we employed binocular rivalry to test whether this hemispheric asymmetry is due to absolute or relative SF processing. In each trial, participants viewed a pair of rivalrous orthogonal gratings of different SFs, presented either to the left or right of central fixation, and continuously reported which grating they perceived. We found that the hemispheric asymmetry in perception is significantly influenced by relative processing of the SFs of the simultaneously presented stimuli. For example, when a medium SF grating and a higher SF grating were presented as a rivalry pair, subjects were more likely to report that they initially perceived the medium SF grating when the rivalry pair was presented in the left visual hemifield (right hemisphere), compared to the right hemifield. However, this same medium SF grating, when it was paired in rivalry with a lower SF grating, was more likely to be perceptually selected when it was in the right visual hemifield (left hemisphere). 
Thus, the visual system’s classification of a given SF as “low” or “high” (and therefore, which hemisphere preferentially processes that SF) depends on the other SFs that are present, demonstrating that relative SF processing contributes to hemispheric differences in visual perceptual selection. PMID:28469585

  1. Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods.

    PubMed

    Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel

    2017-01-01

    Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question remains: when domain experts analyze their data, can they completely trust the outputs and operations on the machine side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare how the level of trust varies across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with that in the conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on the trustworthiness of visual analytic systems.

  2. Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli

    PubMed Central

    Saproo, Sameer

    2010-01-01

    Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. 
This sharpening in the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives. PMID:20410360
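    The contrast drawn above, a nonspecific amplitude change versus a genuine sharpening of the population response profile, can be sketched with a Gaussian tuning curve. The function, parameters, and values below are assumptions for exposition, not the study's model:

```python
import math

def tuning(theta, pref=0.0, amp=1.0, width=30.0, baseline=0.1):
    """Gaussian orientation tuning of a population response (arbitrary units)."""
    return baseline + amp * math.exp(-(theta - pref) ** 2 / (2 * width**2))

orientations = list(range(-90, 91, 15))
# Multiplicative gain scales the whole profile; sharpening narrows it.
gain = [tuning(t, amp=1.5) for t in orientations]       # amplitude boost
sharpened = [tuning(t, width=20.0) for t in orientations]  # narrower profile
print(gain[6] > sharpened[6])    # True: gained curve higher at preferred (0 deg)
print(sharpened[0] < gain[0])    # True: sharpened curve lower in the flanks
```

    Only the second pattern changes the shape of the profile, which is why the abstract's sharpening result cannot be reduced to general arousal.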

  3. Dissociable effects of top-down and bottom-up attention during episodic encoding

    PubMed Central

    Uncapher, Melina R.; Hutchinson, J. Benjamin; Wagner, Anthony D.

    2011-01-01

    It is well established that the formation of memories for life’s experiences—episodic memory—is influenced by how we attend to those experiences, yet the neural mechanisms by which attention shapes episodic encoding are still unclear. We investigated how top-down and bottom-up attention contribute to memory encoding of visual objects in humans by manipulating both types of attention during functional magnetic resonance imaging (fMRI) of episodic memory formation. We show that dorsal parietal cortex—specifically, intraparietal sulcus (IPS)—was engaged during top-down attention and was also recruited during the successful formation of episodic memories. By contrast, bottom-up attention engaged ventral parietal cortex—specifically, temporoparietal junction (TPJ)—and was also more active during encoding failure. Functional connectivity analyses revealed further dissociations in how top-down and bottom-up attention influenced encoding: while both IPS and TPJ influenced activity in perceptual cortices thought to represent the information being encoded (fusiform/lateral occipital cortex), they each exerted opposite effects on memory encoding. Specifically, during a preparatory period preceding stimulus presentation, a stronger drive from IPS was associated with a higher likelihood that the subsequently attended stimulus would be encoded. By contrast, during stimulus processing, stronger connectivity with TPJ was associated with a lower likelihood the stimulus would be successfully encoded. These findings suggest that during encoding of visual objects into episodic memory, top-down and bottom-up attention can have opposite influences on perceptual areas that subserve visual object representation, suggesting that one manner in which attention modulates memory is by altering the perceptual processing of to-be-encoded stimuli. PMID:21880922

  4. Changes in the distribution of sustained attention alter the perceived structure of visual space.

    PubMed

    Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael

    2017-02-01

Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.

  5. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual, but not vestibular, motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Visual context modulates potentiation of grasp types during semantic object categorization.

    PubMed

    Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J

    2014-06-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

  7. Visual context modulates potentiation of grasp types during semantic object categorization

    PubMed Central

    Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.

    2013-01-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270

  8. Cultural differences in visual object recognition in 3-year-old children

    PubMed Central

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S., but not Japanese, children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  9. Cultural differences in visual object recognition in 3-year-old children.

    PubMed

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Perceptual context and individual differences in the language proficiency of preschool children.

    PubMed

    Banai, Karen; Yifat, Rachel

    2016-02-01

    Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    PubMed

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention, the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and its location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.

  12. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  13. Surfing a spike wave down the ventral stream.

    PubMed

    VanRullen, Rufin; Thorpe, Simon J

    2002-10-01

Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.

  14. Visualization of electrolyte filling process and influence of vacuum during filling for hard case prismatic lithium ion cells by neutron imaging to optimize the production process

    NASA Astrophysics Data System (ADS)

    Weydanz, W. J.; Reisenweber, H.; Gottschalk, A.; Schulz, M.; Knoche, T.; Reinhart, G.; Masuch, M.; Franke, J.; Gilles, R.

    2018-03-01

The process of filling electrolyte into lithium ion cells is time consuming and critical to the overall battery quality. However, this process is not well understood, partially because it is hard to observe in situ. A powerful tool for visualization of the process is neutron imaging. The filling and wetting process of the electrode stack can be clearly visualized in situ without destruction of the actual cell. The wetting of certain areas inside the electrode stack can clearly be seen when using this technique. Results showed that wetting of the electrode stack takes place in a mostly isotropic manner from the outside towards a center point of the cell at very similar speed. When the electrolyte reaches the center point, the wetting process can be considered complete. The electrode wetting is a slow but rather steady process for hard case prismatic cells. It starts with a certain speed, which is reduced over the progress of the wetting. Vacuum can assist the process and accelerate it by about a factor of two, as was shown experimentally. This gives a considerable time and cost advantage for designing the production process for large-scale battery cell production.

  15. Elevated arousal levels enhance contrast perception.

    PubMed

    Kim, Dongho; Lokey, Savannah; Ling, Sam

    2017-02-01

Our state of arousal fluctuates from moment to moment, and these fluctuations can have profound impacts on behavior. Arousal has been proposed to play a powerful, widespread role in the brain, influencing processes as far ranging as perception, memory, learning, and decision making. Although arousal clearly plays a critical role in modulating behavior, the mechanisms underlying this modulation remain poorly understood. To address this knowledge gap, we examined the modulatory role of arousal on one of the cornerstones of visual perception: contrast perception. Using a reward-driven paradigm to manipulate arousal state, we discovered that elevated arousal state substantially enhances visual sensitivity, incurring a multiplicative modulation of contrast response. Contrast defines vision, determining whether objects appear visible or invisible to us, and these results indicate that one of the consequences of decreased arousal state is an impaired ability to visually process our environment.

  16. Do attentional capacities and processing speed mediate the effect of age on executive functioning?

    PubMed

    Gilsoul, Jessica; Simon, Jessica; Hogge, Michaël; Collette, Fabienne

    2018-02-06

Executive processes are well known to decline with age, and similar declines have been reported for attentional capacities and processing speed. We therefore investigated whether these two nonexecutive variables mediate the effect of age on executive functions (inhibition, shifting, updating, and dual-task coordination). We administered a large battery of executive, attentional and processing speed tasks to 104 young and 71 older people, and we performed mediation analyses with variables showing a significant age effect. All executive and processing speed measures showed age-related effects, while only performance on the visual scanning task (selective attention) was explained by age when controlling for gender and educational level. Regarding the mediation analyses, visual scanning partially mediated the age effect on updating, while processing speed partially mediated the age effect on shifting, updating and dual-task coordination. In a more exploratory way, inhibition was also found to partially mediate the effect of age on the three other executive functions. Attention did not greatly influence executive functioning in aging, while, in agreement with the literature, processing speed seems to be a major mediator of the age effect on these processes. Interestingly, the global pattern of results also seems to indicate an influence of inhibition, but further studies are needed to confirm the role of this variable as a mediator and its relative importance in comparison with processing speed.
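The mediation logic used in studies like this one can be sketched with ordinary least-squares regressions. The following is a hypothetical illustration, not the authors' analysis pipeline: the variable names (`age`, `speed`, `exec_score`) and the simulated data are assumptions, and a real analysis would also test the significance of the indirect effect (e.g. by bootstrapping).

```python
import numpy as np

def mediation(x, m, y):
    """Baron & Kenny-style decomposition: total effect c (y ~ x),
    path a (m ~ x), then direct effect c' and path b (y ~ x + m).
    The indirect effect a*b is the part of the x->y effect carried
    by the mediator; c = c' + a*b holds exactly for OLS."""
    def slopes(preds, target):
        X = np.column_stack([np.ones(len(target))] + preds)
        return np.linalg.lstsq(X, target, rcond=None)[0][1:]
    c = slopes([x], y)[0]
    a = slopes([x], m)[0]
    c_prime, b = slopes([x, m], y)
    return {"total": c, "a": a, "b": b,
            "direct": c_prime, "indirect": a * b}

# Hypothetical simulated data: age drives processing speed, which
# in turn drives the executive score (full mediation by construction).
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)
speed = 0.5 * age + rng.normal(0, 2, 200)         # mediator
exec_score = 0.8 * speed + rng.normal(0, 2, 200)  # outcome
res = mediation(age, speed, exec_score)
```

In this toy example the direct effect of `age` comes out near zero while the indirect path through `speed` carries essentially the whole total effect, which is the "full mediation" pattern; partial mediation, as reported in the abstract, corresponds to a reduced but nonzero direct effect.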

  17. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
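The phi-square confusability measure mentioned above can be illustrated with a small sketch. The exact formulation in the cited work may differ; here phi-square between two stimuli is taken as the chi-square statistic of their response-count distributions (two rows of a confusion matrix) divided by the total count, so it ranges from 0 (identical confusion profiles) to 1 (completely disjoint profiles).

```python
import numpy as np

def phi_square(row_a, row_b):
    """Phi-square dissimilarity between two stimuli, computed from
    their rows of a confusion matrix (response counts per response
    category): the chi-square statistic of the 2 x R contingency
    table formed by the two rows, scaled by the total count."""
    table = np.array([row_a, row_b], dtype=float)
    n = table.sum()
    row_tot = table.sum(axis=1, keepdims=True)
    col_tot = table.sum(axis=0, keepdims=True)
    expected = row_tot * col_tot / n
    mask = expected > 0                      # skip empty response columns
    chi2 = (((table - expected) ** 2)[mask] / expected[mask]).sum()
    return chi2 / n

# Identical confusion profiles -> 0 (perceptually equivalent);
# disjoint profiles -> 1 (fully distinct).
print(phi_square([10, 0], [10, 0]))   # 0.0
print(phi_square([10, 0], [0, 10]))   # 1.0
```

Low phi-square values mark word pairs that perceivers confuse, so summing over a word's low-phi-square neighbors gives one way to quantify the lexical competition the abstract describes.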

  18. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters

    PubMed Central

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different. PMID:29018391

  19. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters.

    PubMed

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different.

  20. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
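Frequency tagging works because each stimulus stream is modulated at a known rate, so its steady-state response can be read out as the spectral amplitude at that frequency. A minimal sketch of that readout, assuming a clean segment whose length spans an integer number of tag cycles (the function name and parameters are illustrative, not from the cited study):

```python
import numpy as np

def ssr_amplitude(signal, fs, tag_hz):
    """Single-sided amplitude of the steady-state response at the
    tagging frequency, via the discrete Fourier transform. Assumes
    the segment spans an integer number of tag cycles, so the tag
    frequency falls exactly on a DFT bin."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n        # complex amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)    # bin frequencies in Hz
    k = np.argmin(np.abs(freqs - tag_hz))     # nearest bin to the tag
    return 2 * np.abs(spectrum[k])

# Synthetic check: 2 s of a pure 7 Hz "tag" sampled at 250 Hz
# should come back with amplitude close to 1.
fs = 250
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 7 * t)
print(ssr_amplitude(eeg, fs, 7))  # ≈ 1.0
```

Comparing this amplitude across attention conditions (focused vs. divided) at each electrode is the kind of contrast the abstract reports for the auditory and visual SSRs.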

  1. Reduced effects of pictorial distinctiveness on false memory following dynamic visual noise.

    PubMed

    Parker, Andrew; Kember, Timothy; Dagnall, Neil

    2017-07-01

High levels of false recognition for non-presented items typically occur following exposure to lists of associated words. These false recognition effects can be reduced by making the studied items more distinctive by the presentation of pictures during encoding. One explanation of this is that during recognition, participants expect or attempt to retrieve distinctive pictorial information in order to evaluate the study status of the test item. If this involves the retrieval and use of visual imagery, then interfering with imagery processing should reduce the effectiveness of pictorial information in false memory reduction. In the current experiment, visual-imagery processing was disrupted at retrieval by the use of dynamic visual noise (DVN). It was found that the effects of DVN dissociated true from false memory. Memory for studied words was not influenced by the presence of an interfering noise field. However, false memory was increased and the effects of picture-induced distinctiveness were eliminated. DVN also increased false recollection and remember responses to unstudied items.

  2. Disgust evoked by strong wormwood bitterness influences the processing of visual food cues in women: An ERP study.

    PubMed

    Schwab, Daniela; Giraldo, Matteo; Spiegl, Benjamin; Schienle, Anne

    2017-01-01

    The perception of intense bitterness is associated with disgust and food rejection. The present cross-modal event-related potential (ERP) study investigated whether a bitter aftertaste is able to influence affective ratings and the neuronal processing of visual food cues. We presented 39 healthy normal-weight women (mean age: 22.5 years) with images depicting high-caloric meat dishes, high-caloric sweets, and low-caloric vegetables after they had either rinsed their mouth with wormwood tea (bitter group; n = 20) or water (control group; n = 19) for 30s. The bitter aftertaste of wormwood enhanced fronto-central early potentials (N100, N200) and reduced P300 amplitudes for all food types (meat, sweets, vegetables). Moreover, meat and sweets elicited higher fronto-central LPPs than vegetables in the water group. This differentiation was absent in the bitter group, which gave lower arousal ratings for the high-caloric food. We found that a minor intervention ('bitter rinse') was sufficient to induce changes in the neuronal processing of food images reflecting increased early attention (N100, N200) as well as reduced affective value (P300, LPP). Future studies should investigate whether this intervention is able to influence eating behavior. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Influence of hypoglycaemia, with or without caffeine ingestion, on visual sensation and performance.

    PubMed

    Owen, G; Watson, J; McGown, A; Sharma, S; Deary, I; Kerr, D; Barrett, G

    2001-06-01

Full-field visual evoked potentials and visual information processing were measured in 16 normal, healthy subjects during a hyperinsulinaemic clamp. A randomized cross-over design was used across three conditions: hypoglycaemia and caffeine; hypoglycaemia and placebo; and euglycaemia and caffeine. The latency of the P100 component of the pattern-reversal visual evoked potential increased significantly from rest to hypoglycaemia, but no effect of caffeine was found. Subjects were subsequently divided into two groups by a median split of the increase in P100 latency in the placebo condition (Group 1, +0.5 ms; Group 2, +5.6 ms). In the absence of caffeine, an inverse correlation between the increase in P100 latency from rest and a deterioration in visual movement detection was found for Group 2, but not for Group 1. Caffeine ingestion resulted in a further increase in P100 latency, from rest to hypoglycaemia, for subjects in Group 2. Hypoglycaemia in the absence of caffeine produces changes in visual sensation from rest to hypoglycaemia. In those subjects most sensitive to the effects of hypoglycaemia (Group 2), the increase in P100 latency was associated with poorer performance in tests of visual information processing. Caffeine ingestion produced further increases in P100 latency in these subjects.

  4. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    PubMed

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence need longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and it raises new questions for existing theories of visual attention.

  5. Perceptual integration of motion and form information: evidence of parallel-continuous processing.

    PubMed

    von Mühlenen, A; Müller, H J

    2000-04-01

    In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).

  6. Task demands determine comparison strategy in whole probe change detection.

    PubMed

    Udale, Rob; Farrell, Simon; Kent, Chris

    2018-05-01

    Detecting a change in our visual world requires a process that compares the external environment (test display) with the contents of memory (study display). We addressed the question of whether people strategically adapt the comparison process in response to different decision loads. Study displays of 3 colored items were presented, followed by 'whole-display' probes containing 3 colored shapes. Participants were asked to decide whether any probed items contained a new feature. In Experiments 1-4, irrelevant changes to the probed item's locations or feature bindings influenced memory performance, suggesting that participants employed a comparison process that relied on spatial locations. This finding occurred irrespective of whether participants were asked to decide about the whole display, or only a single cued item within the display. In Experiment 5, when the base-rate of changes in the nonprobed items increased (increasing the incentive to use the cue effectively), participants were not influenced by irrelevant changes in location or feature bindings. In addition, we observed individual differences in the use of spatial cues. These results suggest that participants can flexibly switch between spatial and nonspatial comparison strategies, depending on interactions between individual differences and task demand factors. These findings have implications for models of visual working memory that assume that the comparison between study and test obligatorily relies on accessing visual features via their binding to location. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  8. Visually Evoked 3-5 Hz Membrane Potential Oscillations Reduce the Responsiveness of Visual Cortex Neurons in Awake Behaving Mice.

    PubMed

    Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman

    2017-05-17

    Low-frequency membrane potential (Vm) oscillations were once thought to occur only in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations in which the Vm baseline hyperpolarized as the Vm underwent high-amplitude rhythmic fluctuations lasting 1-2 s. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and animal locomotive speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet, very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals. By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in high and low arousal states as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors 0270-6474/17/375084-15$15.00/0.

  9. Visual Functions of the Thalamus

    PubMed Central

    Usrey, W. Martin; Alitto, Henry J.

    2017-01-01

    The thalamus is the heavily interconnected partner of the neocortex. All areas of the neocortex receive afferent input from and send efferent projections to specific thalamic nuclei. Through these connections, the thalamus serves to provide the cortex with sensory input, and to facilitate interareal cortical communication and motor and cognitive functions. In the visual system, the lateral geniculate nucleus (LGN) of the dorsal thalamus is the gateway through which visual information reaches the cerebral cortex. Visual processing in the LGN includes spatial and temporal influences on visual signals that serve to adjust response gain, transform the temporal structure of retinal activity patterns, and increase the signal-to-noise ratio of the retinal signal while preserving its basic content. This review examines recent advances in our understanding of LGN function and circuit organization and places these findings in a historical context. PMID:28217740

  10. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Treesearch

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...
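    The thresholding/K-means approach mentioned in this abstract can be illustrated with a one-dimensional K-means over pixel intensities. This is only a sketch: the mapping of the darker cluster to wood failure is our assumption for illustration, and `wood_failure_percentage` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def wood_failure_percentage(gray, k=2, iters=20, seed=0):
    """Estimate wood-failure percentage by 1-D k-means on pixel intensities.

    Assumes wood-failure regions form the darker cluster; this mapping is
    an illustrative assumption, not the published calibration.
    """
    px = gray.reshape(-1).astype(float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(px, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        labels = np.argmin(np.abs(px[:, None] - centers[None, :]), axis=1)
        # Recompute centers, skipping empty clusters
        for j in range(k):
            if np.any(labels == j):
                centers[j] = px[labels == j].mean()
    dark = np.argmin(centers)  # darker cluster taken as wood failure
    return 100.0 * np.mean(labels == dark)

# Synthetic example: half dark "failure" pixels, half bright substrate
img = np.concatenate([np.full(50, 40.0), np.full(50, 200.0)]).reshape(10, 10)
print(wood_failure_percentage(img))  # → 50.0
```

    On real plywood images the clusters would be less cleanly separated, which is presumably why the study compares algorithms, adhesives, and wood species.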

  11. Masked Translation Priming Effects in Visual Word Recognition by Trilinguals

    ERIC Educational Resources Information Center

    Aparicio, Xavier; Lavaur, Jean-Marc

    2016-01-01

    The present study aims to investigate how trilinguals process their two non-dominant languages and how those languages influence one another, as well as the relative importance of the dominant language on their processing. With this in mind, 24 French (L1)- English (L2)- and Spanish (L3)-unbalanced trilinguals, deemed equivalent in their L2 and L3…

  12. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition relies primarily on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to the simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual, not motor, adaptation. Action recognition therefore relies primarily on vision-based mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  13. Motivation enhances visual working memory capacity through the modulation of central cognitive processes.

    PubMed

    Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu

    2013-09-01

    Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.

  14. The effect of observational learning on students' performance, processes, and motivation in two creative domains.

    PubMed

    Groenendijk, Talita; Janssen, Tanja; Rijlaarsdam, Gert; van den Bergh, Huub

    2013-03-01

    Previous research has shown that observation can be effective for learning in various domains, for example, argumentative writing and mathematics. The question in this paper is whether observational learning can also be beneficial when learning to perform creative tasks in the visual and verbal arts. We hypothesized that observation has a positive effect on performance, process, and motivation, and we expected similarity in competence between the model and the observer to influence the effectiveness of observation. A total of 131 Dutch students (10th grade, 15 years old) participated. Two experiments were carried out (one for visual and one for verbal arts). Participants were randomly assigned to one of three conditions: two observational learning conditions and a control condition (learning by practising). The observational learning conditions differed in instructional focus (on the weaker or the more competent model of a pair to be observed). We found positive effects of observation on creative products, creative processes, and motivation in the visual domain. In the verbal domain, observation seemed to affect the creative process, but not the other variables. The model-similarity hypothesis was not confirmed. Results suggest that observation may foster learning in creative domains, especially in the visual arts. © 2011 The British Psychological Society.

  15. Cognitive and artificial representations in handwriting recognition

    NASA Astrophysics Data System (ADS)

    Lenaghan, Andrew P.; Malyan, Ron

    1996-03-01

    Both cognitive processes and artificial recognition systems may be characterized by the forms of representation they build and manipulate. This paper looks at how handwriting is represented in current recognition systems and at the psychological evidence for its representation in the cognitive processes responsible for reading. Empirical psychological work on feature extraction in early visual processing is surveyed to show that a sound psychological basis for feature extraction exists and to describe the features this approach leads to. We report the first stage in the development of a handwriting recognition architecture strongly influenced by the psychological evidence on the cognitive processes and representations used in early visual processing. This architecture builds a number of parallel low-level feature maps from raw data. These feature maps are thresholded, and a region-labeling algorithm is used to generate sets of features. Fuzzy logic is used to quantify the uncertainty in the presence of individual features.
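    The threshold-then-label pipeline described in the last sentences can be sketched as follows. The abstract does not specify the labeling algorithm, so the BFS flood fill and the toy feature map below are illustrative assumptions.

```python
from collections import deque
import numpy as np

def label_regions(mask):
    """4-connected region labeling of a boolean mask via BFS flood fill.

    Illustrative stand-in for the paper's region-labeling step; the actual
    algorithm used there is not specified in the abstract.
    """
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                current += 1              # start a new region
                labels[r, c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Threshold a toy "feature map", then label connected regions
fmap = np.array([[0.9, 0.8, 0.1],
                 [0.0, 0.7, 0.0],
                 [0.6, 0.0, 0.95]])
mask = fmap > 0.5
labels, n = label_regions(mask)  # n == 3 distinct regions here
```

    Each labeled region would then yield a feature (position, size, shape) whose uncertainty the paper quantifies with fuzzy logic.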

  16. Improvement in visual search with practice: mapping learning-related changes in neurocognitive stages of processing.

    PubMed

    Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G

    2015-04-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. Copyright © 2015 the authors 0270-6474/15/355351-09$15.00/0.

  17. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    PubMed Central

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454
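    The three-window gaze analysis above ([0-1250 ms], [1250-2500 ms], [2500-5000 ms]) is, in essence, a binning of fixation onsets relative to array onset. A minimal sketch follows; the fixation times are illustrative, not the study's data.

```python
import numpy as np

# Window edges (ms) relative to visual-array onset, as in the analysis above
edges = np.array([0, 1250, 2500, 5000])

# Hypothetical fixation onset times (ms) for one trial; illustrative only
onsets = np.array([80, 400, 1300, 1900, 2600, 3100, 4800])

# np.digitize assigns each onset to window 1, 2, or 3
window = np.digitize(onsets, edges)
counts = np.bincount(window, minlength=4)[1:4]
print(counts.tolist())  # → [2, 2, 3] fixations per window
```

    In the actual study these per-window counts (and durations) were further broken down by the emotional content of the fixated face and the heard prosody.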

  18. Task-set inertia and memory-consolidation bottleneck in dual tasks.

    PubMed

    Koch, Iring; Rumiati, Raffaella I

    2006-11-01

    Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC → BC vs. BC → BC) in Experiment 3.

  19. Sensory system plasticity in a visually specialized, nocturnal spider.

    PubMed

    Stafstrom, Jay A; Michalik, Peter; Hebets, Eileen A

    2017-04-21

    The interplay between an animal's environmental niche and its behavior can influence the evolutionary form and function of its sensory systems. While intraspecific variation in sensory systems has been documented across distant taxa, fewer studies have investigated how changes in behavior might relate to plasticity in sensory systems across developmental time. To investigate the relationships among behavior, peripheral sensory structures, and central processing regions in the brain, we take advantage of a dramatic within-species shift of behavior in a nocturnal, net-casting spider (Deinopis spinosa), where males cease visually-mediated foraging upon maturation. We compared eye diameters and brain region volumes across sex and life stage, the latter through X-ray micro-computed tomography. We show that mature males possess altered peripheral visual morphology when compared to their juvenile counterparts, as well as juvenile and mature females. Matching peripheral sensory structure modifications, we uncovered differences in relative investment in both lower-order and higher-order processing regions in the brain responsible for visual processing. Our study provides evidence for sensory system plasticity when individuals dramatically change behavior across life stages, uncovering new avenues of inquiry focusing on altered reliance of specific sensory information when entering a new behavioral niche.

  20. Lateralized hybrid faces: evidence of a valence-specific bias in the processing of implicit emotions.

    PubMed

    Prete, Giulia; Laeng, Bruno; Tommasi, Luca

    2014-01-01

    It is well known that hemispheric asymmetries exist in the analysis of both low-level visual information (such as spatial frequency) and high-level visual information (such as emotional expressions). In this study, we assessed which of these factors underlies perceptual laterality effects with "hybrid faces": a type of stimulus that allows testing for unaware processing of emotional expressions, in which the emotion is carried by the low-frequency information while an image of the same face with a neutral expression is superimposed on it. Although hybrid faces are perceived as neutral, the emotional information modulates observers' social judgements. In the present study, participants were asked to assess the friendliness of hybrid faces displayed tachistoscopically, either centrally or laterally to fixation. We found a clear influence of the hidden emotions with lateral presentations as well. Happy faces were rated as more friendly, and angry faces as less friendly, than neutral faces. In general, hybrid faces were evaluated as less friendly when presented in the left visual field/right hemisphere than in the right visual field/left hemisphere. The results extend the validity of the valence hypothesis to the specific domain of unaware (subcortical) emotion processing.

  1. Nonvisual influences on visual-information processing in the superior colliculus.

    PubMed

    Stein, B E; Jiang, W; Wallace, M T; Stanford, T R

    2001-01-01

    Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.

  2. Selection-for-action in visual search.

    PubMed

    Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold

    2005-01-01

    Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity-demanding, so we manipulated the set size of the display. The results indicated a clear cognitive processing-capacity requirement, i.e. the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features, and therefore manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

  3. Influence of Interpretation Aids on Attentional Capture, Visual Processing, and Understanding of Front-of-Package Nutrition Labels.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón

    2015-01-01

    To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, participants answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels than for color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that included neither color codes nor text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using both a color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative for increasing attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
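    The logistic-regression analysis of correct answers against the two label factors (color code, text descriptor) can be sketched on simulated data. Everything below is illustrative: the data-generating coefficients are invented, and the plain gradient-ascent fit stands in for whatever statistical package the authors used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression (intercept in column 0)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P(correct)
        w += lr * X.T @ (y - p) / len(y)       # average log-likelihood gradient
    return w

rng = np.random.default_rng(1)
n = 2000
color = rng.integers(0, 2, n)   # 1 = color-coded FOP label
text = rng.integers(0, 2, n)    # 1 = text descriptor present

# Illustrative data-generating model: both aids raise the odds of a
# correct answer (coefficients are invented, not the study's estimates)
logit = -0.5 + 1.0 * color + 0.6 * text
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), color, text])
w = fit_logistic(X, y)
# Positive fitted coefficients for both aids indicate higher odds of
# a correct answer, mirroring the direction of the reported result.
```

    A 2×2 factorial design like this one would normally also test the interaction term; it is omitted here to keep the sketch minimal.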

  4. Influence of habitat degradation on fish replenishment

    NASA Astrophysics Data System (ADS)

    McCormick, M. I.; Moore, J. A. Y.; Munday, P. L.

    2010-09-01

    Temperature-induced coral bleaching is a major threat to the biodiversity of coral reef ecosystems. While reductions in species diversity and abundance of fish communities have been documented following coral bleaching, the mechanisms that underlie these changes are poorly understood. The present study examined the impacts of coral bleaching on the early life-history processes of coral reef fishes. Daily monitoring of fish settlement patterns found that ten times as many fish settled to healthy coral as to sub-lethally bleached coral. Species diversity of settling fishes was lowest on bleached coral and highest on dead coral, with healthy coral having intermediate levels of diversity. Laboratory experiments using light-trap caught juveniles showed that different damselfish species chose among healthy, bleached and dead coral habitats using different combinations of visual and olfactory cues. The live coral specialist, Pomacentrus moluccensis, preferred live coral and avoided bleached and dead coral, using mostly visual cues to inform its habitat choice. The habitat generalist, Pomacentrus amboinensis, also preferred live coral and avoided bleached and dead coral, but selected these habitats using both visual and olfactory cues. Trials with another habitat generalist, Dischistodus sp., suggested that vision played a significant role. A 20-day field experiment that manipulated densities of P. moluccensis on healthy and bleached coral heads found an influence of fish density on juvenile weight and growth, but no significant influence of habitat quality. These results suggest that coral bleaching will affect settlement patterns and species distributions by influencing the visual and olfactory cues that reef fish larvae use to make settlement choices. Furthermore, increased fish density within the remaining healthy coral habitats could play an important role in influencing population dynamics.

  5. Age-related changes in visual exploratory behavior in a natural scene setting

    PubMed Central

    Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J.; Brandt, Stephan A.

    2013-01-01

    Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In older subjects, head movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimulus locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects, suggesting that they approached driving as a general detection task rather than as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media. PMID:23801970

  6. Social Vision: Functional Forecasting and the Integration of Compound Social Cues

    PubMed Central

    Adams, Reginald B.; Kveraga, Kestutis

    2017-01-01

    For decades the study of social perception was largely compartmentalized by type of social cue: race, gender, emotion, eye gaze, body language, facial expression etc. This was partly due to good scientific practice (e.g., controlling for extraneous variability), and partly due to assumptions that each type of social cue was functionally distinct from others. Herein, we present a functional forecast approach to understanding compound social cue processing that emphasizes the importance of shared social affordances across various cues (see too Adams, Franklin, Nelson, & Stevenson, 2010; Adams & Nelson, 2011; Weisbuch & Adams, 2012). We review the traditional theories of emotion and face processing that argued for dissociable and noninteracting pathways (e.g., for specific emotional expressions, gaze, identity cues), as well as more recent evidence for combinatorial processing of social cues. We argue here that early, and presumably reflexive, visual integration of such cues is necessary for adaptive behavioral responding to others. In support of this claim, we review contemporary work that reveals a flexible visual system, one that readily incorporates meaningful contextual influences in even nonsocial visual processing, thereby establishing the functional and neuroanatomical bases necessary for compound social cue integration. Finally, we explicate three likely mechanisms driving such integration. Together, this work implicates a role for cognitive penetrability in visual perceptual abilities that have often been (and in some cases still are) ascribed to direct encapsulated perceptual processes. PMID:29242738

  7. Foveal analysis and peripheral selection during active visual sampling

    PubMed Central

    Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.

    2014-01-01

    Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
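    The noise-classification logic described above can be sketched in a few lines: inject external noise into a stimulus feature at each time bin within a fixation, then correlate the noise at each bin with the observer's choices to recover which bins actually drove behavior. The following simulation is entirely hypothetical (the weights, noise levels, and the assumption that uptake stops before the saccade are invented for illustration); it is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 5000, 10          # trials and time bins within a fixation

# External noise injected into the stimulus feature at each time bin
noise = rng.normal(0.0, 1.0, size=(n_trials, n_bins))

# Hypothetical observer: only the first 6 bins drive the decision
# (uptake stops some time before the saccade), plus internal noise
weights = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
decision = (noise @ weights + rng.normal(0.0, 2.0, n_trials)) > 0

# Classification image: correlate the noise in each bin with the choice;
# bins with non-zero correlation are the ones the observer used
kernel = np.array([np.corrcoef(noise[:, b], decision)[0, 1]
                   for b in range(n_bins)])
print(np.round(kernel, 2))
```

    Recovering near-zero correlations in the late bins is how such an analysis would show that information uptake ended before the eye movement.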

  8. Multimodal lexical processing in auditory cortex is literacy skill dependent.

    PubMed

    McNorgan, Chris; Awati, Neha; Desroches, Amy S; Booth, James R

    2014-09-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others demonstrate recruitment of primary and associative auditory cortex during cross-modal processing. We sought to determine whether brain regions supporting phonological processing of larger lexical units (monosyllabic words) over larger time windows are sensitive to cross-modal information, and whether such effects are literacy dependent. Twenty-two children (age 8-14 years) made rhyming judgments for sequentially presented word and pseudoword pairs presented either unimodally (auditory- or visual-only) or cross-modally (audiovisual). Regression analyses examined the relationship between literacy and congruency effects (overlapping orthography and phonology vs. overlapping phonology only). We extend previous findings by showing that higher literacy is correlated with greater congruency effects in auditory cortex (i.e., planum temporale) only for cross-modal processing. These skill effects were specific to known words and occurred over a large time window, suggesting that multimodal integration in posterior auditory cortex is critical for fluent reading. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
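    The regression analysis in a design like this reduces to relating each child's literacy score to the size of their congruency effect. A minimal sketch with simulated data (the scores, effect sizes, and noise level below are hypothetical, chosen only to illustrate the analysis, not taken from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 22                                   # children, as in the study

# Hypothetical data: literacy score and an audiovisual congruency effect
# (congruent minus incongruent activation) in planum temporale
literacy = rng.normal(100.0, 12.0, n)
congruency = 0.02 * (literacy - 100.0) + rng.normal(0.0, 0.15, n)

# Simple linear regression of congruency effect on literacy skill
res = stats.linregress(literacy, congruency)
print(f"slope = {res.slope:.3f}, r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")
```

    A positive, reliable slope is what would support the claim that the cross-modal congruency effect grows with literacy skill.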

  9. Unconscious cues bias first saccades in a free-saccade task.

    PubMed

    Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang

    2014-10-01

    Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free choices such as saccades in a free-viewing task. In our experiment, we presented visual cues below the awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Selective attention determines emotional responses to novel visual stimuli.

    PubMed

    Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T

    2003-11-01

    Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.

  11. Usability and Visual Communication for Southern California Tsunami Evacuation Information: The importance of information design in disaster risk management

    NASA Astrophysics Data System (ADS)

    Jaenichen, C.; Schandler, S.; Wells, M.; Danielsen, T.

    2015-12-01

    Evacuation behavior, including participation and response, is rarely an individual and isolated process, and the outcomes are usually systemic. Ineffective evacuation information can easily contribute to delayed evacuation response. Delays increase demands on already extended emergency personnel, increase the likelihood of traffic congestion, and can cause harm to self and property. From an information design perspective, addressing issues in cognitive recall and emergency psychology, this case study examines evacuation messaging including written, audio, and visual presentation of information, and describes the application of design principles and the role of visual communication for Southern California tsunami evacuation outreach. The niche of this project is the inclusion of cognitive processing as the driving influence when making formal design decisions, supported by measurable data from a 4-year cognitive recall study. The image included shows a tsunami evacuation map before and after the redesign.

  12. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  13. Inversion effect in the visual processing of Chinese character: an fMRI study.

    PubMed

    Zhao, Jizheng; Liu, Jiangang; Li, Jun; Liang, Jimin; Feng, Lu; Ai, Lin; Tian, Jie

    2010-07-05

    Chinese people engage in long-term processing of characters. It has been demonstrated that presentation orientation affects the perception of several types of stimuli with which people have acquired expertise, e.g. faces, bodies, and scenes. However, the influence of inversion on the neural mechanisms of Chinese character processing has not been sufficiently examined. In the present study, a functional magnetic resonance imaging (fMRI) experiment was performed to examine the effect of inversion on Chinese character processing, employing Chinese characters, faces and houses as stimuli. The region of interest analysis demonstrates that inversion leads to increased neural responses for Chinese characters in the left fusiform character-preferential area, bilateral fusiform object-preferential area and bilateral occipital object-preferential area, and such inversion-related changes in the response pattern for character processing are highly similar to those for face processing but quite different from those for house processing. Whole brain analysis reveals that upright characters recruit several language regions for phonological and semantic processing, whereas inverted characters activate extensive regions related to visual information processing. Our findings reveal a shift from the character-preferential processing route to the generic object processing stream within visual cortex when characters are inverted, and suggest that different mechanisms may underlie upright and inverted Chinese character processing. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Can Driving-Simulator Training Enhance Visual Attention, Cognition, and Physical Functioning in Older Adults?

    PubMed

    Haeger, Mathias; Bock, Otmar; Memmert, Daniel; Hüttermann, Stefanie

    2018-01-01

    Virtual reality offers a good way to implement real-life tasks in a laboratory-based training or testing scenario. Computerized training in a driving simulator thus offers an ecologically valid training approach. Because visual attention influences driving performance, we used the reverse approach to test the influence of driving training on visual attention and executive functions. Thirty-seven healthy older participants (mean age: 71.46 ± 4.09; gender: 17 men and 20 women) took part in our controlled experimental study. We examined transfer effects from a four-week driving training (three times per week) on visual attention, executive function, and motor skill. Effects were analyzed using a repeated-measures analysis of variance with group and time as main factors, to assess training-related benefits of the intervention. Results revealed improvements for the intervention group in divided visual attention; however, there were benefits neither in the other cognitive domains nor in the additional motor task. Thus, there are no broad training-induced transfer effects from such an ecologically valid training regime. This lack of findings could be attributed to insufficient training intensity or a participant-induced bias following the cancelled randomization process.
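    In a 2 (group) x 2 (time) mixed design like this one, the training-related benefit is the group-by-time interaction, which is equivalent to comparing pre-to-post change scores between groups. A minimal sketch with simulated data (group sizes, score scale, and effect size below are invented for illustration and do not reproduce the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30                                   # participants per group (illustrative)

# Simulated divided-attention scores, pre and post, for an intervention
# group (assumed here to improve) and a control group (assumed stable)
pre_int, post_int = rng.normal(50, 8, n), rng.normal(62, 8, n)
pre_ctl, post_ctl = rng.normal(50, 8, n), rng.normal(50, 8, n)

# Group-by-time interaction via change scores: did the intervention
# group improve more from pre to post than the control group?
change_int = post_int - pre_int
change_ctl = post_ctl - pre_ctl
t, p = stats.ttest_ind(change_int, change_ctl)
print(f"interaction: t = {t:.2f}, p = {p:.4f}")
```

    A full repeated-measures ANOVA would additionally report the main effects of group and time, but the change-score contrast above is the interaction term that carries the training-benefit claim.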

  15. Single cell integration of animate form, motion and location in the superior temporal cortex of the macaque monkey.

    PubMed

    Jellema, Tjeerd; Maassen, Gerard; Perrett, David I

    2004-07-01

    This study investigated the cellular mechanisms in the anterior part of the superior temporal sulcus (STSa) that underlie the integration of different features of the same visually perceived animate object. Three visual features were systematically manipulated: form, motion and location. In 58% of a population of cells selectively responsive to the sight of a walking agent, the location of the agent significantly influenced the cell's response. The influence of position was often evident in intricate two- and three-way interactions with the factors form and/or motion. For only one of the 31 cells tested, the response could be explained by just a single factor. For all other cells at least two factors, and for half of the cells (52%) all three factors, played a significant role in controlling responses. Our findings support a reformulation of the Ungerleider and Mishkin model, which envisages a subdivision of the visual processing into a ventral 'what' and a dorsal 'where' stream. We demonstrated that at least part of the temporal cortex ('what' stream) makes ample use of visual spatial information. Our findings open up the prospect of a much more elaborate integration of visual properties of animate objects at the single cell level. Such integration may support the comprehension of animals and their actions.

  16. Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks

    PubMed Central

    Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.

    2015-01-01

    The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. PMID:26496502
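    The core idea of reward-gated plasticity can be illustrated with a much simpler REINFORCE-style rule on a toy discrimination task: weight changes are the product of a reward-prediction error (reward minus a running baseline) and the pre- and post-synaptic activity. This sketch is not the learning rule derived in the paper, only a minimal example of the same reward-modulated principle; the task, rates, and network size are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
lr, n_trials = 0.2, 2000

stimuli = np.eye(2)                      # two one-hot input patterns
targets = np.array([0, 1])               # correct response for each pattern
W = rng.normal(0.0, 0.1, size=(2, 2))    # response units x input units
r_bar = 0.0                              # running reward baseline

for _ in range(n_trials):
    i = rng.integers(2)
    x = stimuli[i]
    logits = W @ x
    logits -= logits.max()               # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(2, p=p)               # stochastic response selection
    r = 1.0 if a == targets[i] else 0.0
    # Reward-gated update: (reward - baseline) x post-synaptic
    # eligibility (chosen action minus expectation) x pre-synaptic input
    post = np.zeros(2)
    post[a] = 1.0
    W += lr * (r - r_bar) * np.outer(post - p, x)
    r_bar += 0.05 * (r - r_bar)          # track expected reward

acc = np.mean([np.argmax(W @ stimuli[i]) == targets[i] for i in range(2)])
print("greedy accuracy after training:", acc)
```

    The same gating principle, scaled up to feedforward, horizontal, and feedback connections, is what allows a global reward signal to train recurrent grouping without explicit supervision.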

  17. The influence of motivational and mood states on visual attention: A quantification of systematic differences and casual changes in subjects' focus of attention.

    PubMed

    Hüttermann, Stefanie; Memmert, Daniel

    2015-01-01

    A great number of studies have shown that different motivational and mood states can influence human attentional processes in a variety of ways. Yet, none of these studies have reliably quantified the exact changes of the attentional focus in order to be able to compare attentional performance under different motivational and mood influences and, beyond that, to evaluate their effectiveness. In two studies, we explored subjects' differences in the breadth and distribution of attention as a function of motivational and mood manipulations. In Study 1, motivational orientation was classified in terms of regulatory focus (promotion vs. prevention), and in Study 2, mood was classified in terms of valence (positive vs. negative). Study 1 found a 10% wider distribution of visual attention in promotion-oriented subjects compared to prevention-oriented ones. The results of Study 2 reveal a 22% widening of subjects' visual attentional breadth when listening to happy music and a 36% narrowing when listening to melancholic music. In total, the findings show that systematic differences and casual changes in the shape and scope of focused attention may be associated with different motivational and mood states.

  18. The Influence of Averageness on Adults' Perceptions of Attractiveness: The Effect of Early Visual Deprivation.

    PubMed

    Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda

    2016-08-03

    Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.

  19. Social Experience Does Not Abolish Cultural Diversity in Eye Movements

    PubMed Central

    Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto

    2011-01-01

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differs across cultural groups. Currently, many of the differences reported in previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626

  20. Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach

    PubMed Central

    Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.

    2010-01-01

    We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863

  1. Affect of the unconscious: visually suppressed angry faces modulate our decisions.

    PubMed

    Almeida, Jorge; Pajtas, Petra E; Mahon, Bradford Z; Nakayama, Ken; Caramazza, Alfonso

    2013-03-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item--a Chinese character--that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala.

  2. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.

  3. Effects of kinesthetic and cutaneous stimulation during the learning of a viscous force field.

    PubMed

    Rosati, Giulio; Oscari, Fabio; Pacchierotti, Claudio; Prattichizzo, Domenico

    2014-01-01

    Haptic stimulation can help humans learn perceptual motor skills, but the precise way in which it influences the learning process has not yet been clarified. This study investigates the role of the kinesthetic and cutaneous components of haptic feedback during the learning of a viscous curl field, also taking into account the influence of visual feedback. We present the results of an experiment in which 17 subjects were asked to make reaching movements while grasping a joystick and wearing a pair of cutaneous devices. Each device was able to provide cutaneous contact forces through a moving platform. The subjects received visual feedback about the joystick's position. During the experiment, the system delivered a perturbation through (1) full haptic stimulation, (2) kinesthetic stimulation alone, (3) cutaneous stimulation alone, (4) altered visual feedback, or (5) altered visual feedback plus cutaneous stimulation. Conditions 1, 2, and 3 were also tested with the cancellation of the visual feedback of position error. Results indicate that kinesthetic stimuli played a primary role during motor adaptation to the viscous field, which is a fundamental premise for motor learning and rehabilitation. On the other hand, cutaneous stimulation alone appeared not to bring significant direct or adaptation effects, although it helped in reducing direct effects when used in addition to kinesthetic stimulation. The experimental conditions with visual cancellation of position error showed slower adaptation rates, indicating that visual feedback actively contributes to the formation of internal models. However, modest learning effects were detected when the visual information was used to render the viscous field.

  4. Disinhibition outside receptive fields in the visual cortex.

    PubMed

    Walker, Gary A; Ohzawa, Izumi; Freeman, Ralph D

    2002-07-01

    By definition, the region outside the classical receptive field (CRF) of a neuron in the visual cortex does not directly activate the cell. However, the response of a neuron can be influenced by stimulation of the surrounding area. In previous work, we showed that this influence is mainly suppressive and that it is generally limited to a local region outside the CRF. In the experiments reported here, we investigate the mechanisms of the suppressive effect. Our approach is to find the position of a grating patch that is most effective in suppressing the response of a cell. We then use a masking stimulus at different contrasts over the grating patch in an attempt to disinhibit the response. We find that suppressive effects may be partially or completely reversed by use of the masking stimulus. This disinhibition suggests that effects from outside the CRF may be local. Although they do not necessarily underlie the perceptual analysis of a figure-ground visual scene, they may provide a substrate for this process.

  5. Dysbindin modulates brain function during visual processing in children.

    PubMed

    Mechelli, A; Viding, E; Kumar, A; Pettersson-Yeo, W; Fusar-Poli, P; Tognin, S; O'Donovan, M C; McGuire, P

    2010-01-01

    Schizophrenia is a neurodevelopmental disorder, and risk genes are thought to act through disruption of brain development. Several genetic studies have identified dystrobrevin binding protein 1 (DTNBP1, also known as dysbindin) as a potential susceptibility gene for schizophrenia, but its impact on brain function is poorly understood. It has been proposed that DTNBP1 may be associated with differences in visual processing. To test this, we examined the impact of a genetic variant in DTNBP1 (rs2619538), common to all schizophrenia-associated haplotypes in an earlier UK-Irish study, on visual processing in 61 healthy children aged 10-12 years. We tested the hypothesis that carriers of the risk allele would show altered occipital cortical function relative to noncarriers. Functional Magnetic Resonance Imaging (fMRI) was used to measure brain responses during a visual matching task. The data were analysed using statistical parametric mapping and statistical inferences were made at p<0.05 (corrected for multiple comparisons). Relative to noncarriers, carriers of the risk allele had greater activation in the lingual, fusiform, and inferior occipital gyri. In these regions DTNBP1 genotype accounted for 19%, 20% and 14% of the inter-individual variance, respectively. Our results suggest that genetic variation in DTNBP1 is associated with differences in the function of brain areas that mediate visual processing, and that these effects are evident in young children. These findings are consistent with the notion that the DTNBP1 gene influences brain development and can thereby modulate vulnerability to schizophrenia.

  6. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements, while the periphery is suited for fast, coarse movements. In either the fovea or the periphery, discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and the spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
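
    The linear Dmax-eccentricity relation described in this abstract can be sketched as a one-line model. The slope and intercept below are hypothetical illustration values, not parameters from the paper:

```python
# Toy illustration: Dmax grows linearly with retinal eccentricity.
# Slope and intercept are hypothetical, chosen only for demonstration.

def dmax_deg(eccentricity_deg, slope=0.3, intercept=1.0):
    """Largest flash separation (deg) still perceived as apparent motion."""
    return intercept + slope * eccentricity_deg

for ecc in (0, 10, 20, 40):
    print(f"eccentricity {ecc:2d} deg -> Dmax ~ {dmax_deg(ecc):.1f} deg")
```

    Fitting such a line to psychophysical Dmax measurements at several eccentricities would give the empirical slope that the model aims to reproduce from retino-cortical parameters.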

  7. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
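
    The core ingredient of such a model, a dynamic neural field whose activation forms a localized peak around an input along a feature dimension, can be sketched in a few lines. The grid size, interaction kernel, and all parameter values below are illustrative assumptions, not those of the published model:

```python
import numpy as np

# Minimal one-dimensional dynamic neural field (Amari-style rate dynamics),
# a toy version of the kind of field used in models like the one above.
# All parameters are illustrative assumptions.

n = 101                          # points along a feature dimension (e.g., hue)
x = np.arange(n)
tau, h = 10.0, -2.0              # time constant and resting level

def gauss(d, sigma):
    return np.exp(-0.5 * (d / sigma) ** 2)

# Lateral interaction: local excitation plus broader inhibition
d = np.abs(x[:, None] - x[None, :])
kernel = 1.5 * gauss(d, 3.0) - 0.8 * gauss(d, 10.0)

u = np.full(n, h)                # field activation
stim = 5.0 * gauss(x - 50, 3.0)  # localized input (e.g., a color cue)

for _ in range(200):             # Euler integration of the field dynamics
    rate = 1.0 / (1.0 + np.exp(-u))        # sigmoid output nonlinearity
    u += (-u + h + stim + kernel @ rate) / tau

peak = int(np.argmax(u))
print("activation peak at position", peak)
```

    The local-excitation/broad-inhibition kernel is what lets a remembered feature value persist as a self-sustaining peak and bias nearby perceptual representations, the mechanism the model uses to couple working memory to saccade planning.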

  8. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  9. Visual representations in science education: The influence of prior knowledge and cognitive load theory on instructional design principles

    NASA Astrophysics Data System (ADS)

    Cook, Michelle Patrick

    2006-11-01

    Visual representations are essential for communicating ideas in the science classroom; however, the design of such representations is not always beneficial for learners. This paper presents instructional design considerations providing empirical evidence and integrating theoretical concepts related to cognitive load. Learners have a limited working memory, and instructional representations should be designed with the goal of reducing unnecessary cognitive load. However, cognitive architecture alone is not the only factor to be considered; individual differences, especially prior knowledge, are critical in determining what impact a visual representation will have on learners' cognitive structures and processes. Prior knowledge can determine the ease with which learners can perceive and interpret visual representations in working memory. Although a long tradition of research has compared experts and novices, more research is necessary to fully explore the expert-novice continuum and maximize the potential of visual representations.

  10. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
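
    The idea of filtering a complex scanpath into a simpler, more manageable form can be sketched as follows. The data format (a list of (aircraft_id, dwell_ms) fixations), the dwell threshold, and the merging rule are assumptions for demonstration, not the paper's actual procedure:

```python
# Illustrative sketch of scanpath "filtering": collapse consecutive
# fixations on the same aircraft and drop brief glances. The threshold
# and data format are hypothetical simplifications.

def filter_scanpath(fixations, min_dwell_ms=100):
    filtered = []
    for target, dwell in fixations:
        if dwell < min_dwell_ms:
            continue                      # drop brief glances
        if filtered and filtered[-1][0] == target:
            # merge consecutive fixations on the same aircraft
            filtered[-1] = (target, filtered[-1][1] + dwell)
        else:
            filtered.append((target, dwell))
    return filtered

raw = [("AC1", 250), ("AC1", 180), ("AC2", 60), ("AC3", 300), ("AC1", 220)]
print(filter_scanpath(raw))
# -> [('AC1', 430), ('AC3', 300), ('AC1', 220)]
```

    Increasing the filtering intensity (e.g., raising the dwell threshold) yields progressively simpler sequences, which is the kind of reduction that made the ATCs' linguistic descriptions easier to map onto the eye tracking data.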

  11. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  13. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4-7Hz), alpha (8-13Hz) and beta bands (14-20Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Does the walking task matter? Influence of different walking conditions on dual-task performances in young and older persons.

    PubMed

    Beurskens, Rainer; Bock, Otmar

    2013-12-01

    Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway of 20 m length in four different walking conditions: (a) wide path at preferred pace; (b) narrow path at preferred pace; (c) wide path at fast pace; (d) obstacled wide path at preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone, which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrain: the higher visual demand of those conditions accentuates age-related deficits in coordinating walking with a visual non-walking task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Domain specificity versus expertise: factors influencing distinct processing of faces.

    PubMed

    Carmel, David; Bentin, Shlomo

    2002-02-01

    To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories, regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.

  16. Visual processing of emotional expressions in mixed anxious-depressed subclinical state: an event-related potential study on a female sample.

    PubMed

    Rossignol, M; Philippot, P; Crommelinck, M; Campanella, S

    2008-10-01

    Controversy remains about the existence and the nature of a specific bias in emotional facial expression processing in the mixed anxious-depressed state (MAD). Event-related potentials were recorded in three groups defined by the Spielberger state and trait anxiety inventory (STAI) and the Beck depression inventory (BDI): a group of anxious participants (n=12), a group of participants with depressive and anxious tendencies (n=12), and a control group (n=12). Participants performed a visual oddball task in which they had to detect, as quickly as possible, deviant faces amongst a train of standard neutral faces. Deviant stimuli changed either in identity or in emotion (happy or sad expression). Anxiety facilitated emotional processing, and the two anxious groups produced quicker responses than control participants; these effects were correlated with an earlier decisional wave (P3b) for anxious participants. Mixed anxious-depressed participants showed enhanced visual processing of deviant stimuli and produced higher amplitudes in the attentional complex (N2b/P3a), both for identity and emotional trials. P3a was also particularly increased for emotional faces in this group. Anxious state mainly influenced later decision processes (shorter latency of P3b), whereas mixed anxious-depressed state acted on earlier steps of emotional processing (enhanced N2b/P3a complex). Mixed anxious-depressed individuals seemed more reactive to any visual change, particularly emotional change, without displaying any valence bias.

  17. The amygdala and basal forebrain as a pathway for motivationally guided attention.

    PubMed

    Peck, Christopher J; Salzman, C Daniel

    2014-10-08

    Visual stimuli associated with rewards attract spatial attention. Neurophysiological mechanisms that mediate this process must register both the motivational significance and location of visual stimuli. Recent neurophysiological evidence indicates that the amygdala encodes information about both of these parameters. Furthermore, the firing rate of amygdala neurons predicts the allocation of spatial attention. One neural pathway through which the amygdala might influence attention involves the intimate and bidirectional connections between the amygdala and basal forebrain (BF), a brain area long implicated in attention. Neurons in the rhesus monkey amygdala and BF were therefore recorded simultaneously while subjects performed a detection task in which the stimulus-reward associations of visual stimuli modulated spatial attention. Neurons in BF were spatially selective for reward-predictive stimuli, much like the amygdala. The onset of reward-predictive signals in each brain area suggested different routes of processing for reward-predictive stimuli appearing in the ipsilateral and contralateral fields. Moreover, neurons in the amygdala, but not BF, tracked trial-to-trial fluctuations in spatial attention. These results suggest that the amygdala and BF could play distinct yet inter-related roles in influencing attention elicited by reward-predictive stimuli. Copyright © 2014 the authors.

  18. Potentiation of the early visual response to learned danger signals in adults and adolescents

    PubMed Central

    Howsley, Philippa; Jordan, Jeff; Johnston, Pat

    2015-01-01

    The reinforcing effects of aversive outcomes on avoidance behaviour are well established. However, their influence on perceptual processes is less well explored, especially during the transition from adolescence to adulthood. Using electroencephalography, we examined whether learning to actively or passively avoid harm can modulate early visual responses in adolescents and adults. The task included two avoidance conditions, active and passive, where two different warning stimuli predicted the imminent, but avoidable, presentation of an aversive tone. To avoid the aversive outcome, participants had to learn to emit an action (active avoidance) for one of the warning stimuli and omit an action for the other (passive avoidance). Both adults and adolescents performed the task with a high degree of accuracy. For both adolescents and adults, increased N170 event-related potential amplitudes were found for both the active and the passive warning stimuli compared with control conditions. Moreover, the potentiation of the N170 to the warning stimuli was stable and long lasting. Developmental differences were also observed; adolescents showed greater potentiation of the N170 component to danger signals. These findings demonstrate, for the first time, that learned danger signals in an instrumental avoidance task can influence early visual sensory processes in both adults and adolescents. PMID:24652856

  19. The influence of visual and auditory information on the perception of speech and non-speech oral movements in patients with left hemisphere lesions.

    PubMed

    Schmid, Gabriele; Thielmann, Anke; Ziegler, Wolfram

    2009-03-01

    Patients with lesions of the left hemisphere often suffer from oral-facial apraxia, apraxia of speech, and aphasia. In these patients, visual features often play a critical role in speech and language therapy, when pictured lip shapes or the therapist's visible mouth movements are used to facilitate speech production and articulation. This demands audiovisual processing both in speech and language treatment and in the diagnosis of oral-facial apraxia. The purpose of this study was to investigate differences in audiovisual perception of speech as compared to non-speech oral gestures. Bimodal and unimodal speech and non-speech items were used and additionally discordant stimuli constructed, which were presented for imitation. This study examined a group of healthy volunteers and a group of patients with lesions of the left hemisphere. Patients made substantially more errors than controls, but the factors influencing imitation accuracy were more or less the same in both groups. Error analyses in both groups suggested different types of representations for speech as compared to the non-speech domain, with speech having a stronger weight on the auditory modality and non-speech processing on the visual modality. Additionally, this study was able to show that the McGurk effect is not limited to speech.

  20. Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns

    PubMed Central

    de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.

    2016-01-01

    The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target the visual processes specific to action and direction discrimination. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633

  1. On the influence of typicality and age of acquisition on semantic processing: Diverging evidence from behavioural and ERP responses.

    PubMed

    Räling, Romy; Holzgrefe-Lang, Julia; Schröder, Astrid; Wartenburger, Isabell

    2015-08-01

    Various behavioural studies show that semantic typicality (TYP) and age of acquisition (AOA) of a specific word influence processing time and accuracy during the performance of lexical-semantic tasks. This study examines the influence of TYP and AOA on semantic processing at behavioural (response times and accuracy data) and electrophysiological levels using an auditory category-member-verification task. Reaction time data reveal independent TYP and AOA effects, while in the accuracy data and the event-related potentials predominantly effects of TYP can be found. The present study thus confirms previous findings and extends evidence found in the visual modality to the auditory modality, demonstrating a modality-independent influence on semantic word processing. However, with regard to the influence of AOA, the diverging results raise questions about the origin of AOA effects as well as about the interpretation of offline and online data. Hence, results will be discussed against the background of recent theories on N400 correlates in semantic processing. In addition, an argument in favour of a complementary use of research techniques will be made. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Advancing Water Science through Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, X.; Troy, T.

    2014-12-01

    As water scientists, we increasingly handle larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economics, policy and education. It can enable analysis within research, further data scientists' understanding of behavior and processes, and potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner, when a more formal methodology or understanding could significantly improve both research within the academy and outreach to the public. First, to broaden and deepen scientific understanding, data visualization allows more analysis targets to be processed simultaneously and can represent the variables effectively, revealing patterns, trends and relationships; it can even open new research directions or branches of water science. Visualization also helps us detect and separate pivotal from trivial influential factors more clearly, so that the original complex target system can be abstracted and modeled. By providing direct visual perception of the differences between observational data and model predictions, data visualization allows researchers to quickly assess the quality of models in water science. Second, data visualization can improve public awareness and perhaps influence behavior. By offering decision makers a clearer perspective on the potential value of water, it can amplify the economic impact of water science and increase relevant employment. By providing policymakers with compelling visuals of the role of water in social and natural systems, it can advance water management and water-conservation legislation. And by letting the public build their own data visualizations through apps and games about water science, it can convey knowledge about water indirectly and raise awareness of water problems.

  3. Network model of top-down influences on local gain and contextual interactions in visual cortex.

    PubMed

    Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D

    2013-10-22

    The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
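The record above describes top-down control of local gain modulating contextual interactions. The authors' full network is not reproduced here; as a minimal illustrative sketch only (all numbers and names are hypothetical, not taken from the paper), the core idea of a task-dependent gain scaling the contribution of long-range contextual input to a model V1 unit can be written as:

```python
def response(feedforward, context, gain):
    """Steady-state firing rate of a toy V1 unit (hypothetical rate model).

    feedforward : direct stimulus drive to the unit
    context     : input arriving via long-range horizontal connections
    gain        : top-down gain factor, assumed to vary with task demands
    The rate is rectified so it stays non-negative.
    """
    return max(0.0, feedforward + gain * context)

# Same stimulus, same contextual input from collinear flankers;
# only the hypothetical top-down gain differs between task conditions.
ff, ctx = 10.0, 4.0
passive = response(ff, ctx, gain=0.2)  # weak contextual facilitation
attend = response(ff, ctx, gain=1.0)   # strong contextual facilitation

print(passive, attend)
```

In this sketch, contour-related facilitation grows with the top-down gain even though the feedforward and contextual inputs are unchanged, which is the qualitative behavior the model above attributes to task-dependent modulation of effective connectivity.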

  4. Neuro-cognitive mechanisms of conscious and unconscious visual perception: From a plethora of phenomena to general principles

    PubMed Central

    Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael

    2011-01-01

    Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669

  5. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  6. The role of aging in intra-item and item-context binding processes in visual working memory.

    PubMed

    Peterson, Dwight J; Naveh-Benjamin, Moshe

    2016-11-01

    Aging is accompanied by declines in both working memory and long-term episodic memory processes. Specifically, important age-related memory deficits are characterized by performance impairments exhibited by older relative to younger adults when binding distinct components into a single integrated representation, despite relatively intact memory for the individual components. While robust patterns of age-related binding deficits are prevalent in studies of long-term episodic memory, observations of such deficits in visual working memory (VWM) may depend on the specific type of binding process being examined. For instance, a number of studies indicate that processes involved in item-context binding of items to occupied spatial locations within visual working memory are impaired in older relative to younger adults. Other findings suggest that intra-item binding of visual surface features (e.g., color, shape), compared to memory for single features, within visual working memory, remains relatively intact. Here, we examined each of these binding processes in younger and older adults under both optimal conditions (i.e., no concurrent load) and concurrent load (e.g., articulatory suppression, backward counting). Experiment 1 revealed an age-related intra-item binding deficit for surface features under no concurrent load but not when articulatory suppression was required. In contrast, in Experiments 2 and 3, we observed an age-related item-context binding deficit regardless of the level of concurrent load. These findings reveal that the influence of concurrent load on distinct binding processes within VWM, potentially those supported by rehearsal, is an important factor mediating the presence or absence of age-related binding deficits within VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
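The sensitivity index d' reported above comes from standard signal detection theory: the difference between the z-transformed hit rate and false-alarm rate. As a quick hedged illustration (the rates below are made up, not the study's data), a cue that raises hits without raising false alarms increases d':

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for illustration only: the verbal cue improves
# hits while leaving false alarms unchanged, so sensitivity rises.
uncued = d_prime(0.70, 0.30)
cued = d_prime(0.85, 0.30)
print(round(uncued, 2), round(cued, 2))
```

A criterion shift alone (saying "present" more often) would raise hits and false alarms together and leave d' roughly unchanged, which is why the measure isolates genuine perceptual sensitivity.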

  8. Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex

    PubMed Central

    Vaden, Ryan J.; Visscher, Kristina M.

    2015-01-01

    Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. 
The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806

  9. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. 
Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  10. Two memories for geographical slant: separation and interdependence of action and awareness

    NASA Technical Reports Server (NTRS)

    Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    1998-01-01

    The present study extended previous findings of geographical slant perception, in which verbal judgments of the incline of hills were greatly overestimated but motoric (haptic) adjustments were much more accurate. In judging slant from memory following a brief or extended time delay, subjects' verbal judgments were greater than those given when viewing hills. Motoric estimates differed depending on the length of the delay and place of response. With a short delay, motoric adjustments made in the proximity of the hill did not differ from those evoked during perception. When given a longer delay or when taken away from the hill, subjects' motoric responses increased along with the increase in verbal reports. These results suggest two different memorial influences on action. With a short delay at the hill, memory for visual guidance is separate from the explicit memory informing the conscious response. With short or long delays away from the hill, short-term visual guidance memory no longer persists, and both motor and verbal responses are driven by an explicit representation. These results support recent research involving visual guidance from memory, where actions become influenced by conscious awareness, and provide evidence for communication between the "what" and "how" visual processing systems.

  11. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
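The cue-integration literature the record above draws on commonly models combination of binocular and monocular estimates as reliability-weighted (inverse-variance) averaging; a task that relies more on binocular information behaves as if that cue were weighted more heavily. This is a generic sketch of that standard model, not the authors' exact analysis, and all numbers are hypothetical:

```python
def combine_cues(est_binocular, var_binocular, est_monocular, var_monocular):
    """Reliability-weighted cue combination of two 3D-orientation estimates.

    Each cue's weight is proportional to its reliability (1 / variance);
    the combined variance is lower than either cue's alone.
    """
    w_b = 1.0 / var_binocular
    w_m = 1.0 / var_monocular
    estimate = (w_b * est_binocular + w_m * est_monocular) / (w_b + w_m)
    combined_var = 1.0 / (w_b + w_m)
    return estimate, combined_var

# Hypothetical cue conflict: binocular cues suggest the disc is slanted
# 30 deg, monocular cues suggest 36 deg. The more reliable binocular
# cue dominates the combined estimate.
est, var = combine_cues(30.0, 1.0, 36.0, 4.0)
print(est, var)
```

Under this scheme, a between-task difference in the influence of binocular disparity, as found for grasping versus placement, corresponds to a task-dependent re-weighting of the cues rather than a change in the cues themselves.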

  12. The Network Architecture of Cortical Processing in Visuo-spatial Reasoning

    PubMed Central

    Shokri-Kojori, Ehsan; Motes, Michael A.; Rypma, Bart; Krawczyk, Daniel C.

    2012-01-01

    Reasoning processes have been closely associated with prefrontal cortex (PFC), but specifically emerge from interactions among networks of brain regions. Yet it remains a challenge to integrate these brain-wide interactions in identifying the flow of processing emerging from sensory brain regions to abstract processing regions, particularly within PFC. Functional magnetic resonance imaging data were collected while participants performed a visuo-spatial reasoning task. We found increasing involvement of occipital and parietal regions together with caudal-rostral recruitment of PFC as stimulus dimensions increased. Brain-wide connectivity analysis revealed that interactions between primary visual and parietal regions predominantly influenced activity in frontal lobes. Caudal-to-rostral influences were found within left-PFC. Right-PFC showed evidence of rostral-to-caudal connectivity in addition to relatively independent influences from occipito-parietal cortices. In the context of hierarchical views of PFC organization, our results suggest that a caudal-to-rostral flow of processing may emerge within PFC in reasoning tasks with minimal top-down deductive requirements. PMID:22624092

  13. The network architecture of cortical processing in visuo-spatial reasoning.

    PubMed

    Shokri-Kojori, Ehsan; Motes, Michael A; Rypma, Bart; Krawczyk, Daniel C

    2012-01-01

    Reasoning processes have been closely associated with prefrontal cortex (PFC), but specifically emerge from interactions among networks of brain regions. Yet it remains a challenge to integrate these brain-wide interactions in identifying the flow of processing emerging from sensory brain regions to abstract processing regions, particularly within PFC. Functional magnetic resonance imaging data were collected while participants performed a visuo-spatial reasoning task. We found increasing involvement of occipital and parietal regions together with caudal-rostral recruitment of PFC as stimulus dimensions increased. Brain-wide connectivity analysis revealed that interactions between primary visual and parietal regions predominantly influenced activity in frontal lobes. Caudal-to-rostral influences were found within left-PFC. Right-PFC showed evidence of rostral-to-caudal connectivity in addition to relatively independent influences from occipito-parietal cortices. In the context of hierarchical views of PFC organization, our results suggest that a caudal-to-rostral flow of processing may emerge within PFC in reasoning tasks with minimal top-down deductive requirements.

  14. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  15. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  16. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  17. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a DIFFUSE, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513

  18. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a DIFFUSE, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.

  19. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  20. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec, which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration.
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.

  1. Pupil size directly modulates the feedforward response in human primary visual cortex independently of attention.

    PubMed

    Bombeke, Klaas; Duthoo, Wout; Mueller, Sven C; Hopf, Jens-Max; Boehler, C Nico

    2016-02-15

    Controversy revolves around the question of whether psychological factors like attention and emotion can influence the initial feedforward response in primary visual cortex (V1). Although traditionally, the electrophysiological correlate of this response in humans (the C1 component) has been found to be unaltered by psychological influences, a number of recent studies have described attentional and emotional modulations. Yet, research into psychological effects on the feedforward V1 response has neglected possible direct contributions of concomitant pupil-size modulations, which are known to also occur under various conditions of attentional load and emotional state. Here we tested the hypothesis that such pupil-size differences themselves directly affect the feedforward V1 response. We report data from two complementary experiments, in which we used procedures that modulate pupil size without differences in attentional load or emotion while simultaneously recording pupil-size and EEG data. Our results confirm that pupil size indeed directly influences the feedforward V1 response, showing an inverse relationship between pupil size and early V1 activity. While it is unclear to what extent this effect represents a functionally relevant adaptation, it identifies pupil-size differences as an important modulating factor of the feedforward response of V1 and could hence represent a confounding variable in research investigating the neural influence of psychological factors on early visual processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    ERIC Educational Resources Information Center

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  3. Inhibition in Dot Comparison Tasks

    ERIC Educational Resources Information Center

    Clayton, Sarah; Gilmore, Camilla

    2015-01-01

    Dot comparison tasks are commonly used to index an individual's Approximate Number System (ANS) acuity, but the cognitive processes involved in completing these tasks are poorly understood. Here, we investigated how factors including numerosity ratio, set size and visual cues influence task performance. Forty-four children aged 7-9 years completed…

  4. What Do Graded Effects of Semantic Transparency Reveal about Morphological Processing?

    ERIC Educational Resources Information Center

    Feldman, Laurie Beth; Soltano, Emily G.; Pastizzo, Matthew J.; Francis, Sarah E.

    2004-01-01

    We examined the influence of semantic transparency on morphological facilitation in English in three lexical decision experiments. Decision latencies to visual targets (e.g., CASUALNESS) were faster after semantically transparent (e.g., CASUALLY) than semantically opaque (e.g., CASUALTY) primes whether primes were auditory and presented…

  5. Influence of detergents on water drift in cooling towers

    NASA Astrophysics Data System (ADS)

    Vitkovicova, Rut

    The influence of detergents on water drift from a cooling tower was investigated experimentally. A model cooling tower was used for these measurements, specifically an experimental aerodynamic line designed for measuring and monitoring the processes taking place around the liquid-phase eliminators. The effect of different concentrations of detergent in the cooling water on the drift of water droplets from a commonly used type of eliminator was observed using visualization methods.

  6. Side Effects of Being Blue: Influence of Sad Mood on Visual Statistical Learning

    PubMed Central

    Bertels, Julie; Demoulin, Catherine; Franco, Ana; Destrebecqz, Arnaud

    2013-01-01

    It is well established that mood influences many cognitive processes, such as learning and executive functions. Although statistical learning, like mood, is assumed to be part of our daily life, the influence of mood on statistical learning had never been investigated before. In the present study, a sad vs. neutral mood was induced in participants by having them listen to stories while they were exposed to a stream of visual shapes made up of the repeated presentation of four triplets, namely sequences of three shapes presented in a fixed order. Given that the inter-stimulus interval was held constant within and between triplets, the only cues available for triplet segmentation were the transitional probabilities between shapes. Direct and indirect measures of learning taken either immediately or 20 minutes after the exposure/mood induction phase revealed that participants learned the statistical regularities between shapes. Interestingly, although participants from the sad and neutral groups performed similarly in these tasks, subjective measures (confidence judgments taken after each trial) revealed that participants who experienced the sad mood induction showed increased conscious access to their statistical knowledge. These effects were not modulated by the time delay between the exposure/mood induction and the test phases. These results are discussed within the scope of the robustness principle and the influence of negative affect on processing style. PMID:23555797
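    The segmentation cue described above can be sketched concretely (shapes, triplets, and stream length here are illustrative, not the study's actual stimuli): in a stream built by concatenating fixed triplets, within-triplet transitional probabilities are 1.0, while transitions across triplet boundaries are only about 1 divided by the number of triplets.

    ```python
    import random
    from collections import Counter

    random.seed(0)
    # Four hypothetical triplets of "shapes"
    triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]
    # Stream of 500 randomly ordered triplets, flattened into single shapes
    stream = [shape for _ in range(500) for shape in random.choice(triplets)]

    # Transitional probability P(next | current) from bigram counts
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

    print(tp[("A", "B")])  # 1.0: within-triplet transition
    print(tp[("C", "D")])  # roughly 0.25: transition across a triplet boundary
    ```

    Learners who track these statistics can segment the stream at the low-probability transitions even though, as in the study, timing provides no boundary cue.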

  7. Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.

    PubMed

    Störmer, Viola; Eppinger, Ben; Li, Shu-Chen

    2014-06-01

    Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.
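    The "response consistency" measure described above (reduced trial-by-trial RT variability) can be sketched with a simple coefficient of variation, here on hypothetical trial data rather than values from the study:

    ```python
    from statistics import mean, stdev

    # Hypothetical single-subject RTs (ms) under high- vs low-reward targets
    rts_high_reward = [412, 398, 405, 420, 401, 409]
    rts_low_reward = [455, 380, 510, 430, 395, 470]

    def cv(rts):
        """Coefficient of variation: trial-to-trial SD scaled by mean RT."""
        return stdev(rts) / mean(rts)

    # Lower CV = more consistent responding under high reward
    print(cv(rts_high_reward) < cv(rts_low_reward))  # True
    ```

    Scaling the SD by the mean keeps the consistency measure comparable across groups that differ in overall speed, which matters in lifespan comparisons where older adults respond more slowly overall.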

  8. The differential effect of trigeminal vs. peripheral pain stimulation on visual processing and memory encoding is influenced by pain-related fear.

    PubMed

    Schmidt, K; Forkmann, K; Sinke, C; Gratz, M; Bitz, A; Bingel, U

    2016-07-01

    Compared to peripheral pain, trigeminal pain elicits higher levels of fear, which is assumed to enhance the interruptive effects of pain on concomitant cognitive processes. In this fMRI study we examined the behavioral and neural effects of trigeminal (forehead) and peripheral (hand) pain on visual processing and memory encoding. Cerebral activity was measured in 23 healthy subjects performing a visual categorization task that was immediately followed by a surprise recognition task. During the categorization task subjects received concomitant noxious electrical stimulation on the forehead or hand. Our data show that fear ratings were significantly higher for trigeminal pain. Categorization and recognition performance did not differ between pictures that were presented with trigeminal and peripheral pain. However, object categorization in the presence of trigeminal pain was associated with stronger activity in task-relevant visual areas (lateral occipital complex, LOC), memory encoding areas (hippocampus and parahippocampus) and areas implicated in emotional processing (amygdala) compared to peripheral pain. Further, individual differences in neural activation between the trigeminal and the peripheral condition were positively related to differences in fear ratings between both conditions. Functional connectivity between amygdala and LOC was increased during trigeminal compared to peripheral painful stimulation. Fear-driven compensatory resource activation seems to be enhanced for trigeminal stimuli, presumably due to their exceptional biological relevance. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Illustration of cross flow of polystyrene melts through a coathanger die

    NASA Astrophysics Data System (ADS)

    Schöppner, V.; Henke, B.

    2015-05-01

    To design an optimal coathanger die with a uniform flow rate distribution and low pressure drop, it is essential to understand the flow conditions in the die. This is important because the quality of the product is influenced by the flow velocity and the flow rate distribution. In extrusion dies, cross flows perpendicular to the main flow also occur in addition to the main flow. This results in pressure gradients in the extrusion direction, which influence the flow distribution and pressure drop in the die. In recent decades, quantitative representation and analysis of physical flow processes have made considerable progress in predicting the weather, developing drive technologies, and designing aircraft using simulation methods and lab trials. Using the flow-line method, the flow in flat film extrusion dies with a rectangular cross-section is analyzed, in particular the cross flows. The simplest method of visualizing the flow is based on measuring the orientation of obstacles in the flow field by adding individual particles. A near-surface flow field can be visualized using wool or textile yarns: by sticking thin strands of wool, frayed at the ends, onto the surface to be examined, cross flows, near-wall flow profiles, and vortex and separation regions can be visualized. A further possibility is to add glass fibers and analyze the fiber orientation by microscopy and X-ray analysis. In this paper the influence of process parameters (e.g. melt temperature and throughput) on cross flow and fiber orientation is described.

  10. Reading skill and word skipping: Implications for visual and linguistic accounts of word skipping.

    PubMed

    Eskenazi, Michael A; Folk, Jocelyn R

    2015-11-01

    We investigated whether high-skill readers skip more words than low-skill readers as a result of parafoveal processing differences based on reading skill. We manipulated foveal load and word length, two variables that strongly influence word skipping, and measured reading skill using the Nelson-Denny Reading Test. We found that reading skill did not influence the probability of skipping five-letter words, but low-skill readers were less likely to skip three-letter words when foveal load was high. Thus, reading skill is likely to influence word skipping when the amount of information in the parafovea falls within the word identification span. We interpret the data in the context of visually based (extended optimal viewing position model) and linguistically based (E-Z Reader model) accounts of word skipping. The models make different predictions about how and why a word is skipped; however, the data indicate that both models should take into account the fact that different factors influence skipping rates for high- and low-skill readers. (c) 2015 APA, all rights reserved.

  11. Watch what you type: the role of visual feedback from the screen and hands in skilled typewriting.

    PubMed

    Snyder, Kristy M; Logan, Gordon D; Yamaguchi, Motonori

    2015-01-01

    Skilled typing is controlled by two hierarchically structured processing loops (Logan & Crump, 2011): The outer loop, which produces words, commands the inner loop, which produces keystrokes. Here, we assessed the interplay between the two loops by investigating how visual feedback from the screen (responses either were or were not echoed on the screen) and the hands (the hands either were or were not covered with a box) influences the control of skilled typing. Our results indicated, first, that the reaction time of the first keystroke was longer when responses were not echoed than when they were. Also, the interkeystroke interval (IKSI) was longer when the hands were covered than when they were visible, and the IKSI for responses that were not echoed was longer when explicit error monitoring was required (Exp. 2) than when it was not required (Exp. 1). Finally, explicit error monitoring was more accurate when response echoes were present than when they were absent, and implicit error monitoring (i.e., posterror slowing) was not influenced by visual feedback from the screen or the hands. These findings suggest that the outer loop adjusts the inner-loop timing parameters to compensate for reductions in visual feedback. We suggest that these adjustments are preemptive control strategies designed to execute keystrokes more cautiously when visual feedback from the hands is absent, to generate more cautious motor programs when visual feedback from the screen is absent, and to enable enough time for the outer loop to monitor keystrokes when visual feedback from the screen is absent and explicit error reports are required.

  12. Direct and indirect effects of attention and visual function on gait impairment in Parkinson's disease: influence of task and turning.

    PubMed

    Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn

    2017-07-01

    Gait impairment is a core feature of Parkinson's disease (PD) which has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficit indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with implications for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Synaptogenesis in visual cortex of normal and preterm monkeys: evidence for intrinsic regulation of synaptic overproduction.

    PubMed Central

    Bourgeois, J P; Jastreboff, P J; Rakic, P

    1989-01-01

    We used quantitative electron microscopy to determine the effect of precocious visual experience on the time course, magnitude, and pattern of perinatal synaptic overproduction in the primary visual cortex of the rhesus monkey. Fetuses were delivered by caesarean section 3 weeks before term, exposed to normal light intensity and day/night cycles, and killed within the first postnatal month, together with age-matched controls that were delivered at term. We found that premature visual stimulation does not affect the rate of synaptic accretion and overproduction. Both of these processes proceed in relation to the time of conception rather than to the time of delivery. In contrast, the size, type, and laminar distribution of synapses were significantly different between preterm and control infants. The changes and differences in these parameters correlate with the duration of visual stimulation and become less pronounced with age. If visual experience in infancy influences the maturation of the visual cortex, it must do so predominantly by strengthening, modifying, and/or eliminating synapses that have already been formed, rather than by regulating the rate of synapse production. PMID:2726773

  14. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  15. Modulation of human extrastriate visual processing by selective attention to colours and words.

    PubMed

    Nobre, A C; Allison, T; McCarthy, G

    1998-07-01

    The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.

  16. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  17. Enhanced dimension-specific visual working memory in grapheme–color synesthesia☆

    PubMed Central

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-01-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme–color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed superior color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibilities of enhanced working memory among synesthetes being due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. PMID:23892185

  18. Early visual ERPs are influenced by individual emotional skills

    PubMed Central

    Roux, Sylvie; Batty, Magali

    2014-01-01

    Processing information from faces is crucial to understanding others and to adapting to social life. Many studies have investigated responses to facial emotions to provide a better understanding of the processes and the neural networks involved. Moreover, several studies have revealed abnormalities of emotional face processing and their neural correlates in affective disorders. The aim of this study was to investigate whether early visual event-related potentials (ERPs) are affected by the emotional skills of healthy adults. Unfamiliar faces expressing the six basic emotions were presented to 28 young adults while recording visual ERPs. No specific task was required during the recording. Participants also completed the Social Skills Inventory (SSI) which measures social and emotional skills. The results confirmed that early visual ERPs (P1, N170) are affected by the emotions expressed by a face and also demonstrated that N170 and P2 are correlated to the emotional skills of healthy subjects. While N170 is sensitive to the subject’s emotional sensitivity and expressivity, P2 is modulated by the ability of the subjects to control their emotions. We therefore suggest that N170 and P2 could be used as individual markers to assess strengths and weaknesses in emotional areas and could provide information for further investigations of affective disorders. PMID:23720573

  19. Characterizing the effects of feature salience and top-down attention in the early visual system.

    PubMed

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-07-01

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. 
We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.

  20. Influence of early attentional modulation on working memory

    PubMed Central

    Gazzaley, Adam

    2011-01-01

    It is now established that attention influences working memory (WM) at multiple processing stages. This liaison between attention and WM poses several interesting empirical questions. Notably, does attention impact WM via its influences on early perceptual processing? If so, what are the critical factors at play in this attention-perception-WM interaction? I review recent data from our laboratory utilizing a variety of techniques (electroencephalography (EEG), functional MRI (fMRI) and transcranial magnetic stimulation (TMS)), stimuli (features and complex objects), novel experimental paradigms, and research populations (younger and older adults), which converge to support the conclusion that top-down modulation of visual cortical activity at early perceptual processing stages (100–200 ms after stimulus onset) impacts subsequent WM performance. Factors that affect attentional control at this stage include cognitive load, task practice, perceptual training, and aging. These developments highlight the complex and dynamic relationships among perception, attention, and memory. PMID:21184764

  1. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately, it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Object recognition contributions to figure-ground organization: operations on outlines and subjective contours.

    PubMed

    Peterson, M A; Gibson, B S

    1994-11-01

    In previous research, replicated here, we found that some object recognition processes influence figure-ground organization. We have proposed that these object recognition processes operate on edges (or contours) detected early in visual processing, rather than on regions. Consistent with this proposal, influences from object recognition on figure-ground organization were previously observed in both pictures and stereograms depicting regions of different luminance, but not in random-dot stereograms, where edges arise late in processing (Peterson & Gibson, 1993). In the present experiments, we examined whether or not two other types of contours--outlines and subjective contours--enable object recognition influences on figure-ground organization. For both types of contours we observed a pattern of effects similar to that originally obtained with luminance edges. The results of these experiments are valuable for distinguishing between alternative views of the mechanisms mediating object recognition influences on figure-ground organization. In addition, in both Experiments 1 and 2, fixated regions were seen as figure longer than nonfixated regions, suggesting that fixation location must be included among the variables relevant to figure-ground organization.

  3. Feature-selective attention in healthy old age: a selective decline in selective attention?

    PubMed

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

    Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well-established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. The electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
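    The frequency-tagging logic used above (each stimulus flickering at its own rate, eliciting a separable steady-state response) can be sketched in a few lines: projecting the recorded signal onto sine and cosine at a tag frequency recovers that stimulus's response amplitude. The sampling rate, tag frequencies, and amplitudes below are synthetic assumptions, not the study's data:

```python
import math

def amplitude_at(signal, fs, freq):
    """Estimate the amplitude of the sinusoidal component at `freq` Hz by
    projecting the signal onto sine and cosine at that frequency."""
    n = len(signal)
    c = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(signal))
    s = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(signal))
    return 2 * math.hypot(c, s) / n

fs = 500                                   # sampling rate in Hz (assumed)
t = [i / fs for i in range(fs * 2)]        # 2 s of samples
# two "stimuli" tagged at 12 Hz and 15 Hz, with different response amplitudes
sig = [1.0 * math.sin(2 * math.pi * 12 * x) + 0.4 * math.sin(2 * math.pi * 15 * x)
       for x in t]
print(round(amplitude_at(sig, fs, 12), 2))  # 1.0
print(round(amplitude_at(sig, fs, 15), 2))  # 0.4
```

    Because both tag frequencies complete an integer number of cycles in the analysis window, the two components are orthogonal and each amplitude is recovered independently of the other.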

  4. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
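    The sensitivity index d' used above is the difference between the normal quantiles of the hit rate and the false-alarm rate. A minimal stdlib sketch, with hypothetical rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# hypothetical rates: congruent visual cues raise hits at a fixed false-alarm rate
print(round(d_prime(0.85, 0.20), 2))  # 1.88
print(round(d_prime(0.75, 0.20), 2))  # 1.52
```

    A visually induced gain in pitch discrimination would show up as a higher d' in the audio-visual than the auditory-only condition.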

  5. Developmental changes in the neural influence of sublexical information on semantic processing.

    PubMed

    Lee, Shu-Hui; Booth, James R; Chou, Tai-Li

    2015-07-01

    Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e. a global reading unit) and semantic radical (i.e. a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e. shared a semantic radical) versus visually-dissimilar (i.e. did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. All together, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Alpha oscillations correlate with the successful inhibition of unattended stimuli.

    PubMed

    Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole

    2011-09-01

    Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magneto-encephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.
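    The hemispheric alpha-power comparison reported above is commonly summarized as a lateralization index; the abstract does not state the exact formula, so the normalized-difference form below is an assumption used only for illustration:

```python
def lateralization_index(alpha_contra, alpha_ipsi):
    """Normalized difference in alpha power contralateral vs. ipsilateral to
    the unattended hemifield; positive values indicate stronger relative
    inhibition of the unattended side."""
    return (alpha_contra - alpha_ipsi) / (alpha_contra + alpha_ipsi)

# hypothetical alpha-power values in arbitrary units
print(lateralization_index(1.5, 1.0))  # 0.2
```

    On this convention, subjects with larger positive indices would be the ones predicted to miss motion direction in the unattended hemifield more often.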

  7. Language and vertical space: on the automaticity of language action interconnections.

    PubMed

    Dudschig, Carolin; de la Vega, Irmgard; De Filippis, Monica; Kaup, Barbara

    2014-09-01

    Grounded models of language processing propose a strong connection between language and sensorimotor processes (Barsalou, 1999, 2008; Glenberg & Kaschak, 2002). However, it remains unclear how functional and automatic these connections are for understanding diverse sets of words (Ansorge, Kiefer, Khalid, Grassl, & König, 2010). Here, we investigate whether words referring to entities with a typical location in the upper or lower visual field (e.g., sun, ground) automatically influence subsequent motor responses even when language-processing levels are kept minimal. The results show that even subliminally presented words influence subsequent actions, as can be seen in a reversed compatibility effect. These findings have several implications for grounded language processing models. Specifically, these results suggest that language-action interconnections are not only the result of strategic language processes, but already play an important role during pre-attentional language processing stages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model

    PubMed Central

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
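    The exponential decay of partial-report performance with cue delay, and the use of its time constant to demarcate sensory memory from VSTM, can be illustrated with a log-linear fit of P(t) = asymptote + a·exp(-t/τ). All numbers below (delays, asymptote, amplitude, τ) are hypothetical, not the study's data:

```python
import math

def fit_time_constant(delays, perf, asymptote):
    """Fit P(t) = asymptote + a*exp(-t/tau) by linear regression of
    log(P - asymptote) against cue delay t; returns tau."""
    ys = [math.log(p - asymptote) for p in perf]
    n = len(delays)
    mx = sum(delays) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(delays, ys))
             / sum((x - mx) ** 2 for x in delays))
    return -1.0 / slope

# hypothetical partial-report data: tau = 300 ms, VSTM floor = 0.4, amplitude = 0.5
tau, floor, amp = 300.0, 0.4, 0.5
delays = [0, 100, 200, 400, 800]                      # cue delays in ms
perf = [floor + amp * math.exp(-t / tau) for t in delays]
print(round(fit_time_constant(delays, perf, floor)))  # 300
```

    Performance at delays well beyond τ reflects the VSTM floor, while the decaying portion indexes the readout from sensory (iconic) memory.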

  10. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  11. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  12. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    PubMed

    Matin, L; Li, W

    1995-01-01

    The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while viewing a fully illuminated complexly structured visual field and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees topbackward or 20 degrees topforward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. 
The linear model previously developed to account for VPEL, which was based on the interaction of influences from the pitched visual field and extraretinal influences from the body-referenced mechanism, was employed to incorporate the effects of adaptation. Connections between VPEL adaptation and other cases of perceptual adaptation of visual direction are described.
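    Taking the abstract's numbers at face value (a gain of roughly 0.6 between pitch angle and VPEL deviation, and a decay time constant of about 4 min in darkness), the reported time course can be sketched as a simple parameterized model; this is an illustrative reading of the reported averages, not the authors' fitted model:

```python
import math

def vpel_deviation(pitch_deg, minutes_in_dark, gain=0.6, tau_min=4.0):
    """Predicted VPEL deviation in degrees: gain * pitch immediately after
    exposure, decaying exponentially toward the dark baseline."""
    return gain * pitch_deg * math.exp(-minutes_in_dark / tau_min)

print(round(vpel_deviation(30, 0), 1))  # 18.0 deg right after pitch exposure
print(round(vpel_deviation(30, 4), 1))  # 6.6 deg after one time constant
```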

  13. The levels of perceptual processing and the neural correlates of increasing subjective visibility.

    PubMed

    Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel

    2017-10-01

    According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Bayesian networks and information theory for audio-visual perception modeling.

    PubMed

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple streams of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
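    The mutual-information analysis that guides model elicitation rests on a simple quantity, I(X;Y) = Σ p(x,y) log2[p(x,y) / (p(x)p(y))], which is zero exactly when X and Y are independent. A minimal sketch on toy audio-visual location tables (not the study's data):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# independent auditory and visual locations -> I = 0 bits
indep = {(a, v): 0.25 for a in "LR" for v in "LR"}
print(mutual_information(indep))    # 0.0
# perfectly coupled auditory and visual locations -> I = 1 bit
coupled = {("L", "L"): 0.5, ("R", "R"): 0.5}
print(mutual_information(coupled))  # 1.0
```

    In the elicitation process, variable pairs with negligible mutual information (given the others) can be left unconnected in the Bayesian network.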

  15. Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition

    PubMed Central

    Chakraborty, Anya; Chakrabarti, Bhismadev

    2018-01-01

    We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics since autism has previously been associated with atypical self-processing. The study did not find any self-face specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain specific manner. PMID:29487554
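    The "slope of the psychometric response curve" here describes how sharply the probability of a 'self' response rises across the self-other morph continuum; a logistic form is a common modeling choice, and the midpoint and slope values below are illustrative assumptions rather than the study's estimates:

```python
import math

def p_self(morph, midpoint=0.5, slope=10.0):
    """Logistic psychometric function: probability of a 'self' response as a
    function of morph level (0 = fully other, 1 = fully self)."""
    return 1.0 / (1.0 + math.exp(-slope * (morph - midpoint)))

# a steeper slope yields a sharper self/other boundary around the midpoint
sharp = p_self(0.6, slope=20.0) - p_self(0.4, slope=20.0)
shallow = p_self(0.6, slope=5.0) - p_self(0.4, slope=5.0)
print(sharp > shallow)  # True
```

    On this parameterization, a "more distinct self-face representation" corresponds to a larger slope parameter: responses flip from 'other' to 'self' over a narrower range of morph levels.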

  17. Processing of proprioceptive and vestibular body signals and self-transcendence in Ashtanga yoga practitioners

    PubMed Central

    Fiori, Francesca; David, Nicole; Aglioti, Salvatore M.

    2014-01-01

    In the rod and frame test (RFT), participants are asked to set a tilted visual linear marker (i.e., a rod), embedded in a square, to the subjective vertical, irrespective of the surrounding frame. People not influenced by the frame tilt are defined as field-independent, while people biased in their rod verticality perception are field-dependent. Performing RFT requires the integration of proprioceptive, vestibular and visual signals with the latter accounting for field-dependency. Studies indicate that motor experts in body-related, balance-improving disciplines tend to be field-independent, i.e., better at verticality perception, suggesting that proprioceptive and vestibular expertise acquired by such exercise may weaken the influence of irrelevant visual signals. What remains unknown is whether the effect of body-related expertise in weighting perceptual information might also be mediated by personality traits, in particular those indexing self-focusing abilities. To explore this issue, we tested field-dependency in a class of body experts, namely yoga practitioners and in non-expert participants. Moreover we explored any link between performance on RFT and self-transcendence (ST), a complex personality construct, which refers to the tendency to experience spiritual feelings and ideas. As expected, yoga practitioners (i) were more accurate in assessing the rod's verticality on the RFT, and (ii) expressed significantly higher ST. Interestingly, the performance in these two tests was negatively correlated. More specifically, when asked to provide verticality judgments, highly self-transcendent yoga practitioners were significantly less influenced by a misleading visual context. Our results suggest that being highly self-transcendent may enable yoga practitioners to optimize verticality judgment tasks by relying more on internal (vestibular and proprioceptive) signals coming from their own body, rather than on exteroceptive, visual cues. PMID:25278866

  18. Effects of visual information regarding allocentric processing in haptic parallelity matching.

    PubMed

    Van Mier, Hanneke I

    2013-10-01

    Research has revealed that haptic perception of parallelity deviates from physical reality. Large and systematic deviations have been found in haptic parallelity matching, most likely due to the influence of the hand-centered egocentric reference frame. Providing information that increases the influence of allocentric processing has been shown to improve performance on haptic matching. In this study allocentric processing was stimulated by providing informative vision in haptic matching tasks that were performed using hand- and arm-centered reference frames. Twenty blindfolded participants (ten men, ten women) explored the orientation of a reference bar with the non-dominant hand and subsequently matched (task HP) or mirrored (task HM) its orientation on a test bar with the dominant hand. Visual information was provided by means of informative vision with participants having full view of the test bar, while the reference bar was blocked from their view (task VHP). To decrease the egocentric bias of the hands, participants also performed a visual haptic parallelity drawing task (task VHPD) using an arm-centered reference frame, by drawing the orientation of the reference bar. In all tasks, the distance between and orientation of the bars were manipulated. A significant effect of task was found; performance improved from task HP, to VHP, to VHPD, and HM. Significant effects of distance were found in the first three tasks, whereas orientation and gender effects were only significant in tasks HP and VHP. The results showed that stimulating allocentric processing by means of informative vision and reducing the egocentric bias by using an arm-centered reference frame led to the most accurate performance on parallelity matching. © 2013 Elsevier B.V. All rights reserved.

  19. Affect of the unconscious: Visually suppressed angry faces modulate our decisions

    PubMed Central

    Pajtas, Petra E.; Mahon, Bradford Z.; Nakayama, Ken; Caramazza, Alfonso

    2016-01-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item—a Chinese character—that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala. PMID:23224765

  20. Decreased visual detection during subliminal stimulation.

    PubMed

    Bareither, Isabelle; Villringer, Arno; Busch, Niko A

    2014-10-17

    What is the perceptual fate of invisible stimuli: are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: Subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: Target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.

  1. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  2. The Processing of Visual and Phonological Configurations of Chinese One- and Two-Character Words in a Priming Task of Semantic Categorization.

    PubMed

    Ma, Bosen; Wang, Xiaoyun; Li, Degao

    2015-01-01

    To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.

  3. Perceptual load corresponds with factors known to influence visual search

    PubMed Central

    Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.

    2014-01-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258

  4. Glossiness and perishable food quality: visual freshness judgment of fish eyes based on luminance distribution.

    PubMed

    Murakoshi, Takuma; Masuda, Tomohiro; Utsumi, Ken; Tsubota, Kazuo; Wada, Yuji

    2013-01-01

    Previous studies have reported the effects of statistics of luminance distribution on visual freshness perception using pictures which included the degradation process of food samples. However, these studies did not examine the effect of individual differences between the same kinds of food. Here we elucidate whether luminance distribution would continue to have a significant effect on visual freshness perception even if visual stimuli included individual differences in addition to the degradation process of foods. We took pictures of the degradation of three fishes over 3.29 hours in a controlled environment, then cropped square patches of their eyes from the original images as visual stimuli. Eleven participants performed paired comparison tests judging the visual freshness of the fish eyes at three points of degradation. Perceived freshness scores (PFS) were calculated using the Bradley-Terry Model for each image. The ANOVA revealed that the PFS for each fish decreased as the degradation time increased; however, the differences in the PFS between individual fish was larger for the shorter degradation time, and smaller for the longer degradation time. A multiple linear regression analysis was conducted in order to determine the relative importance of the statistics of luminance distribution of the stimulus images in predicting PFS. The results show that standard deviation and skewness in luminance distribution have a significant influence on PFS. These results show that even if foodstuffs contain individual differences, visual freshness perception and changes in luminance distribution correlate with degradation time.
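
    The perceived freshness scores above are obtained by fitting paired-comparison outcomes with the Bradley-Terry model. A minimal sketch of such a fit, using the classic minorization-maximization updates; the win counts are invented for illustration and are not the study's data.

```python
# Bradley-Terry fit via minorization-maximization (MM) updates.
# wins[i][j] = number of times stimulus i was judged fresher than stimulus j
# (invented counts for three degradation stages, not the study's data).
def bradley_terry(wins, iters=200):
    n = len(wins)
    p = [1.0] * n  # latent "freshness strengths"
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of stimulus i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [v * n / s for v in new_p]  # rescale so strengths sum to n
    return p

wins = [[0, 8, 9],   # least-degraded stage wins most comparisons
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
print(scores[0] > scores[1] > scores[2])  # True: scores track freshness
```

    Under the model, stimulus i beats stimulus j with probability p_i / (p_i + p_j), so a higher fitted strength corresponds to a higher perceived freshness score.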

  5. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

    Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
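
    The sensitivity measure d′ used above combines hits and false alarms into a bias-free index from signal detection theory. A minimal sketch of the computation follows; the hit and false-alarm rates are made up for illustration and are not the study's data.

```python
# Sensitivity (d') from hit and false-alarm rates (signal detection theory).
# The rates below are illustrative, not the study's data.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n=None):
    # Optional log-linear correction guards against rates of exactly 0 or 1
    # when the number of trials (n) per condition is known.
    if n is not None:
        hit_rate = (hit_rate * n + 0.5) / (n + 1)
        fa_rate = (fa_rate * n + 0.5) / (n + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

uncued = d_prime(0.70, 0.20)  # detection with no cue
cued = d_prime(0.80, 0.20)    # detection after hearing the letter's name
print(round(cued - uncued, 3))  # 0.317: a sensitivity gain, not a bias shift
```

    Because the false-alarm rate is held fixed, the difference reflects a genuine change in discriminability rather than a looser response criterion.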

  6. Gravity influences top-down signals in visual processing.

    PubMed

    Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph

    2014-01-01

    Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.

  7. The unique role of the visual word form area in reading.

    PubMed

    Dehaene, Stanislas; Cohen, Laurent

    2011-06-01

    Reading systematically activates the left lateral occipitotemporal sulcus, at a site known as the visual word form area (VWFA). This site is reproducible across individuals/scripts, attuned to reading-specific processes, and partially selective for written strings relative to other categories such as line drawings. Lesions affecting the VWFA cause pure alexia, a selective deficit in word recognition. These findings must be reconciled with the fact that human genome evolution cannot have been influenced by such a recent and culturally variable activity as reading. Capitalizing on recent functional magnetic resonance imaging experiments, we provide strong corroborating evidence for the hypothesis that reading acquisition partially recycles a cortical territory evolved for object and face recognition, the prior properties of which influenced the form of writing systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. The influence of naturalistic, directionally non-specific motion on the spatial deployment of visual attention in right-hemispheric stroke.

    PubMed

    Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas

    2016-11-01

    An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Object-Based Attention Overrides Perceptual Load to Modulate Visual Distraction

    ERIC Educational Resources Information Center

    Cosman, Joshua D.; Vecera, Shaun P.

    2012-01-01

    The ability to ignore task-irrelevant information and overcome distraction is central to our ability to efficiently carry out a number of tasks. One factor shown to strongly influence distraction is the perceptual load of the task being performed; as the perceptual load of task-relevant information processing increases, the likelihood that…

  10. The Rise of Data in Education Systems: Collection, Visualization and Use

    ERIC Educational Resources Information Center

    Lawn, Martin, Ed.

    2013-01-01

    The growth of education systems and the construction of the state have always been connected. The processes of governing education systems always utilized data through a range of administrative records, pupil testing, efficiency surveys and international projects. By the late twentieth century, quantitative data had gained enormous influence in…

  11. Distraction Control Processes in Free Recall: Benefits and Costs to Performance

    ERIC Educational Resources Information Center

    Marsh, John E.; Sörqvist, Patrik; Hodgetts, Helen M.; Beaman, C. Philip; Jones, Dylan M.

    2015-01-01

    How is semantic memory influenced by individual differences under conditions of distraction? This question was addressed by observing how participants recalled visual target words-drawn from a single category-while ignoring spoken distractor words that were members of either the same or a different (single) category. Working memory capacity (WMC)…

  12. Neural basis of imprinting behavior in chicks.

    PubMed

    Nakamori, Tomoharu; Maekawa, Fumihiko; Sato, Katsushige; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2013-01-01

    Newly hatched chicks memorize the characteristics of the first moving object they encounter, and subsequently show a preference for it. This "imprinting" behavior is an example of infant learning and is elicited by visual and/or auditory cues. Visual information of imprinting stimuli in chicks is first processed in the visual Wulst (VW), a telencephalic area corresponding to the mammalian visual cortex, converges on the core region of the hyperpallium densocellulare (HDCo) cells, and is transmitted to the intermediate medial mesopallium (IMM), a region similar to the mammalian association cortex. The imprinting memory is stored in the IMM, and activities of IMM neurons are altered by imprinting. Imprinting also induces functional and structural plastic changes of neurons in the circuit that links the VW and the IMM. Of these neurons, the activity of the HDCo cells is strongly influenced by imprinting. Expression and modulation of NR2B subunit-containing N-methyl-D-aspartate (NMDA) receptors in the HDCo cells are crucial for plastic changes in this circuit as well as the process of visual imprinting. Thus, elucidation of cellular and molecular mechanisms underlying the plastic changes that occurred in the HDCo cells may provide useful knowledge about infant learning. © 2012 The Authors Development, Growth & Differentiation © 2012 Japanese Society of Developmental Biologists.

  13. Improving visual search in instruction manuals using pictograms.

    PubMed

    Kovačević, Dorotea; Brozović, Maja; Možina, Klementina

    2016-11-01

    Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for the target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking, rather than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.

  14. Foreground-background segmentation and attention: a change blindness study.

    PubMed

    Mazza, Veronica; Turatto, Massimo; Umiltà, Carlo

    2005-01-01

    One of the most debated questions in visual attention research is: what factors affect the deployment of attention in the visual scene? Segmentation processes are influential factors, providing candidate objects for further attentional selection, and the relevant literature has concentrated on how figure-ground segmentation mechanisms influence visual attention. However, another crucial process, namely foreground-background segmentation, seems to have been neglected. By using a change blindness paradigm, we explored whether attention is preferentially allocated to the foreground elements or to the background ones. The results indicated that unless attention was voluntarily deployed to the background, large changes in the color of its elements remained unnoticed. In contrast, minor changes in the foreground elements were promptly reported. Differences in change blindness between the two regions of the display indicate that attention is, by default, biased toward the foreground elements. This also supports the phenomenal observations made by Gestaltists, who demonstrated the greater salience of the foreground than the background.

  15. Gain control by layer six in cortical circuits of vision.

    PubMed

    Olsen, Shawn R; Bortone, Dante S; Adesnik, Hillel; Scanziani, Massimo

    2012-02-22

    After entering the cerebral cortex, sensory information spreads through six different horizontal neuronal layers that are interconnected by vertical axonal projections. It is believed that through these projections layers can influence each other's response to sensory stimuli, but the specific role that each layer has in cortical processing is still poorly understood. Here we show that layer six in the primary visual cortex of the mouse has a crucial role in controlling the gain of visually evoked activity in neurons of the upper layers without changing their tuning to orientation. This gain modulation results from the coordinated action of layer six intracortical projections to superficial layers and deep projections to the thalamus, with a substantial role of the intracortical circuit. This study establishes layer six as a major mediator of cortical gain modulation and suggests that it could be a node through which convergent inputs from several brain areas can regulate the earliest steps of cortical visual processing.
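
    Gain modulation of the kind described above can be pictured as a multiplicative scaling of a neuron's tuning curve: responses shrink or grow uniformly, but the preferred orientation is untouched. A toy sketch follows; the Gaussian tuning curve and the 0.4x gain factor are assumptions for illustration, not measurements from the study.

```python
import math

# Multiplicative gain control: scaling all responses by a constant changes
# their amplitude but not the preferred orientation (argmax of the curve).
# Gaussian tuning and the 0.4x gain factor are assumed, illustrative values.
def tuning(theta_deg, pref=90.0, width=30.0, gain=1.0):
    return gain * math.exp(-((theta_deg - pref) ** 2) / (2 * width ** 2))

thetas = list(range(0, 181, 5))
control = [tuning(t) for t in thetas]
suppressed = [tuning(t, gain=0.4) for t in thetas]  # reduced-gain condition

pref_control = thetas[max(range(len(thetas)), key=lambda i: control[i])]
pref_suppressed = thetas[max(range(len(thetas)), key=lambda i: suppressed[i])]
print(pref_control, pref_suppressed)  # 90 90: same tuning, lower gain
```

    This is the signature the study reports: layer-six activation rescales visually evoked responses in the upper layers while leaving their orientation tuning intact.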

  16. Salience of the lambs: a test of the saliency map hypothesis with pictures of emotive objects.

    PubMed

    Humphrey, Katherine; Underwood, Geoffrey; Lambert, Tony

    2012-01-25

    Humans have an ability to rapidly detect emotive stimuli. However, many emotional objects in a scene are also highly visually salient, which raises the question of how dependent the effects of emotionality are on visual saliency and whether the presence of an emotional object changes the power of a more visually salient object in attracting attention. Participants were shown a set of positive, negative, and neutral pictures and completed recall and recognition memory tests. Eye movement data revealed that visual saliency does influence eye movements, but the effect is reliably reduced when an emotional object is present. Pictures containing negative objects were recognized more accurately and recalled in greater detail, and participants fixated more on negative objects than positive or neutral ones. Initial fixations were more likely to be on emotional objects than more visually salient neutral ones, suggesting that the processing of emotional features occurs at a very early stage of perception.

  17. Improving spatial perception in 5-yr.-old Spanish children.

    PubMed

    Jiménez, Andrés Canto; Sicilia, Antonio Oña; Vera, Juan Granda

    2007-06-01

    Assimilation of distance perception was studied in 70 Spanish primary school children. This assimilation involves the generation of projective images which are acquired through two mechanisms. One mechanism is spatial perception, wherein perceptual processes develop ensuring successful immersion in space and the acquisition of visual cues which a person may use to interpret images seen in the distance. The other mechanism is movement through space so that these images are produced. The present study evaluated the influence of using increasingly larger spaces for training sessions within a motor skills program on improvements in spatial perception. Visual parameters were measured in relation to the capture and tracking of moving objects or ocular motility and speed of detection or visual reaction time. Analysis showed that for the group trained in increasingly larger spaces, ocular motility and visual reaction time were significantly improved during different phases of the program.

  18. Does manipulating the speed of visual flow in virtual reality change distance estimation while walking in Parkinson's disease?

    PubMed

    Ehgoetz Martens, Kaylena A; Ellard, Colin G; Almeida, Quincy J

    2015-03-01

    Although dopaminergic replacement therapy is believed to improve sensory processing in PD, and delayed perceptual speed is thought to be caused by a predominantly cholinergic deficit, it is unclear whether sensory-perceptual deficits are a result of corrupt sensory processing, or a delay in updating perceived feedback during movement. The current study aimed to examine these two hypotheses by manipulating visual flow speed and dopaminergic medication to examine which influenced distance estimation in PD. Fourteen PD and sixteen HC participants were instructed to estimate the distance of a remembered target by walking to the position the target formerly occupied. This task was completed in virtual reality in order to manipulate the visual flow (VF) speed in real time. Three conditions were carried out: (1) BASELINE: VF speed was equal to participants' real-time movement speed; (2) SLOW: VF speed was reduced by 50 %; (3) FAST: VF speed was increased by 30 %. Individuals with PD performed the experiment in their ON and OFF state. PD demonstrated significantly greater judgement error during BASELINE and FAST conditions compared to HC, although PD did not improve their judgement error during the SLOW condition. Additionally, PD had greater variable error during baseline compared to HC; however, during the SLOW conditions, PD had significantly less variable error compared to baseline and similar variable error to HC participants. Overall, dopaminergic medication did not significantly influence judgement error. Therefore, these results suggest that corrupt processing of sensory information is the main contributor to sensory-perceptual deficits during movement in PD rather than delayed updating of sensory feedback.

  19. The Influence of Photoreceptor Size and Distribution on Optical Sensitivity in the Eyes of Lanternfishes (Myctophidae)

    PubMed Central

    de Busserolles, Fanny; Fitzpatrick, John L.; Marshall, N. Justin; Collin, Shaun P.

    2014-01-01

    The mesopelagic zone of the deep-sea (200-1000 m) is characterised by exponentially diminishing levels of downwelling sunlight and by the predominance of bioluminescence emissions. The ability of mesopelagic organisms to detect and behaviourally react to downwelling sunlight and/or bioluminescence will depend on the visual task and ultimately on the eyes and their capacity for detecting low levels of illumination and intermittent point sources of bioluminescent light. In this study, we investigate the diversity of the visual system of the lanternfish (Myctophidae). We focus specifically on the photoreceptor cells by examining their size, arrangement, topographic distribution and contribution to optical sensitivity in 53 different species from 18 genera. We also examine the influence(s) of both phylogeny and ecology on these photoreceptor variables using phylogenetic comparative analyses in order to understand the constraints placed on the visual systems of this large group of mesopelagic fishes at the first stage of retinal processing. We report great diversity in the visual system of the Myctophidae at the level of the photoreceptors. Photoreceptor distribution reveals clear interspecific differences in visual specialisations (areas of high rod photoreceptor density), indicating potential interspecific differences in interactions with prey, predators and/or mates. A great diversity in photoreceptor design (length and diameter) and density is also present. Overall, the myctophid eye is very sensitive compared to other teleosts and each species seems to be specialised for the detection of a specific signal (downwelling light or bioluminescence), potentially reflecting different visual demands for survival. Phylogenetic comparative analyses highlight several relationships between photoreceptor characteristics and the ecological variables tested (depth distribution and luminous tissue patterns). Depth distribution at night was a significant factor in most of the models tested, indicating that vision at night is of great importance for lanternfishes and may drive the evolution of their photoreceptor design. PMID:24927016

  20. ‘If you are good, I get better’: the role of social hierarchy in perceptual decision-making

    PubMed Central

    Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria

    2014-01-01

    Until now, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. Using event-related potentials, we examined the time course of social hierarchy processing and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process in this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. PMID:23946003
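    The drift-diffusion analysis described above can be illustrated with a minimal simulation. Everything below (parameter values, function names, simulation settings) is a hypothetical sketch, not the model actually fitted by the authors:

    ```python
    import random

    def ddm_trial(drift, boundary=1.0, ndt=0.3, dt=0.001, rng=random):
        """One drift-diffusion trial: evidence accumulates from 0 until it
        hits +boundary (one response) or -boundary (the other). The returned
        reaction time adds the nondecision time ndt (encoding + motor)."""
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        return x > 0, ndt + t

    def mean_rt(drift, ndt, n=200, seed=1):
        """Mean reaction time over n simulated trials with a fixed seed."""
        rng = random.Random(seed)
        return sum(ddm_trial(drift, ndt=ndt, rng=rng)[1] for _ in range(n)) / n
    ```

    Because the seeded noise is identical across calls, changing only `ndt` shifts mean reaction time without touching the accumulation process, which mirrors the abstract's conclusion that social hierarchy acts on nondecision time rather than on evidence accumulation.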

  1. Top-down modulation from inferior frontal junction to FEFs and intraparietal sulcus during short-term memory for visual features.

    PubMed

    Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark

    2013-11-01

    Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
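    Granger causality of the kind used to estimate directed influence between IFJ, FEF, aIPS, and visual cortex reduces, in its simplest pairwise form, to comparing nested autoregressive fits. The sketch below is a generic illustration (the lag order and variable names are assumptions, and fMRI applications require additional corrections not shown here):

    ```python
    import numpy as np

    def granger_f(x, y, lag=2):
        """F-type Granger statistic for 'x Granger-causes y'.

        Compares the residual sum of squares of y predicted from its own
        past (restricted model) against y predicted from its own past plus
        the past of x (full model). Larger values indicate a stronger
        predictive influence of x on y.
        """
        n = len(y)
        Y = y[lag:]
        own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
        other = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
        ones = np.ones((n - lag, 1))
        Xr = np.hstack([ones, own])         # restricted: y's past only
        Xf = np.hstack([ones, own, other])  # full: y's past + x's past
        rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
        rss_f = np.sum((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2)
        return ((rss_r - rss_f) / lag) / (rss_f / (n - lag - 2 * lag - 1))
    ```

    Applied to two series where one lags the other, the statistic is large in the driving direction and near 1 in the reverse direction, which is the asymmetry the study exploits to infer an initiating role for IFJ.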

  2. Using Eye Tracking to Explore Consumers' Visual Behavior According to Their Shopping Motivation in Mobile Environments.

    PubMed

    Hwang, Yoon Min; Lee, Kun Chang

    2017-07-01

    Despite a strong shift to mobile shopping trends, many in-depth questions about mobile shoppers' visual behaviors in mobile shopping environments remain unaddressed. This study aims to answer two challenging research questions (RQs): (a) how strongly do shopping motivations such as goal orientation and recreation influence mobile shoppers' visual behavior toward displays of shopping information on a mobile shopping screen, and (b) to what extent does mobile shoppers' visual behavior influence their purchase intention for the products displayed on a mobile shopping screen? An eye-tracking approach is adopted to answer the RQs empirically. The experimental results showed that goal-oriented shoppers paid closer attention to products' information areas to meet their shopping goals. Their purchase intention was positively influenced by their visual attention to two areas of interest: product information and consumer opinions. In contrast, recreational shoppers tended to visually fixate on the promotion area, which positively influences their purchase intention. The results contribute to understanding mobile shoppers' visual behaviors and shopping intentions from the perspective of mindset theory.

  3. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.

  4. Decoding visual object categories in early somatosensory cortex.

    PubMed

    Smith, Fraser W; Goodale, Melvyn A

    2015-04-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.

  5. Decoding Visual Object Categories in Early Somatosensory Cortex

    PubMed Central

    Smith, Fraser W.; Goodale, Melvyn A.

    2015-01-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136

  6. [Eccentricity-dependent influence of amodal completion on visual search].

    PubMed

    Shirama, Aya; Ishiguchi, Akira

    2009-06-01

    Does amodal completion occur homogeneously across the visual field? Rensink and Enns (1998) found that visual search for efficiently-detected fragments became inefficient when observers perceived the fragments as a partially-occluded version of a distractor due to a rapid completion process. We examined the effect of target eccentricity in Rensink and Enns's tasks and a few additional tasks by magnifying the stimuli in the peripheral visual field to compensate for the loss of spatial resolution (M-scaling; Rovamo & Virsu, 1979). We found that amodal completion disrupted the efficient search for the salient fragments (i.e., target) even when the target was presented at high eccentricity (within 17 deg). In addition, the configuration effect of the fragments, which produced amodal completion, increased with eccentricity while the same target was detected efficiently at the lowest eccentricity. This eccentricity effect is different from a previously-reported eccentricity effect where M-scaling was effective (Carrasco & Frieder, 1997). These findings indicate that the visual system has a basis for rapid completion across the visual field, but the stimulus representations constructed through amodal completion have eccentricity-dependent properties.
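    M-scaling, as used in this study, magnifies peripheral stimuli to compensate for the decline of cortical magnification with eccentricity. A minimal sketch under the common linear-falloff approximation follows; the `e2` constant below is an illustrative value, not the parameter from Rovamo and Virsu (1979):

    ```python
    def m_scaled_size(base_size_deg, eccentricity_deg, e2=2.0):
        """Scale a stimulus so its cortical representation stays roughly constant.

        Assumes cortical magnification falls off as M(E) = M0 / (1 + E/E2),
        so compensating for the loss of peripheral resolution means
        multiplying stimulus size by (1 + E/E2). The E2 value here (the
        eccentricity at which the required size doubles) is an assumption
        for illustration only.
        """
        return base_size_deg * (1.0 + eccentricity_deg / e2)
    ```

    With this scaling applied, residual eccentricity effects (such as the configuration effect reported above) cannot be attributed to the loss of spatial resolution alone.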

  7. The pervasive nature of unconscious social information processing in executive control

    PubMed Central

    Prabhakaran, Ranjani; Gray, Jeremy R.

    2012-01-01

    Humans not only have impressive executive abilities, but we are also fundamentally social creatures. In the cognitive neuroscience literature, it has long been assumed that executive control mechanisms, which play a critical role in guiding goal-directed behavior, operate on consciously processed information. Although more recent evidence suggests that unconsciously processed information can also influence executive control, most of this literature has focused on visual masked priming paradigms. However, the social psychological literature has demonstrated that unconscious influences are pervasive, and social information can unintentionally influence a wide variety of behaviors, including some that are likely to require executive abilities. For example, social information can unconsciously influence attention processes, such that simply instructing participants to describe a previous situation in which they had power over someone or someone else had power over them has been shown to unconsciously influence their attentional focus abilities, a key aspect of executive control. In the current review, we consider behavioral and neural findings from a variety of paradigms, including priming of goals and social hierarchical roles, as well as interpersonal interactions, in order to highlight the pervasive nature of social influences on executive control. These findings suggest that social information can play a critical role in executive control, and that this influence often occurs in an unconscious fashion. We conclude by suggesting further avenues of research for investigation of the interplay between social factors and executive control. PMID:22557956

  8. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed to elucidate whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speed has different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carried mainly slow motion information, while connections between LGN and MST carried mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  10. Emotion and anxiety potentiate the way attention alters visual appearance.

    PubMed

    Barbot, Antoine; Carrasco, Marisa

    2018-04-12

    The ability to swiftly detect and prioritize the processing of relevant information around us is critical for the way we interact with our environment. Selective attention is a key mechanism that serves this purpose, improving performance in numerous visual tasks. Reflexively attending to sudden information helps detect impending threat or danger, a possible reason why emotion modulates the way selective attention affects perception. For instance, the sudden appearance of a fearful face potentiates the effects of exogenous (involuntary, stimulus-driven) attention on performance. Internal states such as trait anxiety can also modulate the impact of attention on early visual processing. However, attention not only improves performance; it also alters the way visual information appears to us, e.g. by enhancing perceived contrast. Here we show that emotion potentiates the effects of exogenous attention on both performance and perceived contrast. Moreover, we found that trait anxiety mediates these effects, with stronger influences of attention and emotion in anxious observers. Finally, changes in performance and appearance correlated with each other, likely reflecting common attentional modulations. Altogether, our findings show that emotion and anxiety interact with selective attention to truly alter how we see.

  11. Scoliosis brace design: influence of visual aesthetics on user acceptance and compliance.

    PubMed

    Law, Derry; Cheung, Mei-Chun; Yip, Joanne; Yick, Kit-Lun; Wong, Christina

    2017-06-01

    Adolescent idiopathic scoliosis is a common condition found in adolescents. A rigid brace is often prescribed as the treatment for this spinal deformity, which negatively affects user compliance due to the discomfort caused by the brace, and the psychological distress resulting from its appearance. However, the latter, which is the impact of visual aesthetics, has not been thoroughly studied for scoliosis braces. Therefore, a qualitative study with in-depth interviews has been carried out with 10 participants who have a Cobb angle of 20°-30° to determine the impact of visual aesthetics on user acceptance and compliance towards the brace. It is found that co-designing with patients on the aesthetic aspects of the surface design of the brace increases the level of user compliance and induces positive user perception. Therefore, aesthetic preferences need to be taken into consideration in the design process of braces. Practitioner Summary: The impact of visual aesthetics on user acceptance and compliance towards a rigid brace for scoliosis is investigated. The findings indicate that an aesthetically pleasing brace and the involvement of patients in the design process of the brace are important for increasing user compliance and addressing psychological issues during treatment.

  12. Fear Processing in Dental Phobia during Crossmodal Symptom Provocation: An fMRI Study

    PubMed Central

    Maslowski, Nina Isabel; Wittchen, Hans-Ulrich; Lueken, Ulrike

    2014-01-01

    While previous studies have successfully identified the core neural substrates of the animal subtype of specific phobia, research on dental phobia remains scarce and inconsistent. This may partly reflect the fact that, typically, only visual stimuli have been employed. The current study aimed to investigate the influence of stimulus modality on neural fear processing in dental phobia. Thirteen dental phobics (DP) and thirteen healthy controls (HC) attended a block-design functional magnetic resonance imaging (fMRI) symptom provocation paradigm encompassing both visual and auditory stimuli. Drill sounds and matched neutral sinus tones served as auditory stimuli, and dentist scenes and matched neutral videos as visual stimuli. Group comparisons showed increased activation in the insula, anterior cingulate cortex, orbitofrontal cortex, and thalamus in DP compared to HC during auditory but not visual stimulation. In contrast, no differential autonomic reactions were observed in DP. Present results are largely comparable to brain areas identified in animal phobia, but also point towards a potential downregulation of autonomic outflow by neural fear circuits in this disorder. The findings enlarge our knowledge about the neural correlates of dental phobia and may help to understand the neural underpinnings of the clinical and physiological characteristics of the disorder. PMID:24738049

  13. Hunger and satiety in anorexia nervosa: fMRI during cognitive processing of food pictures.

    PubMed

    Santel, Stephanie; Baving, Lioba; Krauel, Kerstin; Münte, Thomas F; Rotte, Michael

    2006-10-09

    Neuroimaging studies of visually presented food stimuli in patients with anorexia nervosa (AN) have demonstrated decreased activations in inferior parietal and visual occipital areas, and increased frontal activations relative to healthy persons, but so far no inferences could be drawn with respect to the influence of hunger or satiety. Thirteen patients with AN and 10 healthy control subjects (aged 13-21) rated visual food and non-food stimuli for pleasantness during functional magnetic resonance imaging (fMRI) in a hungry and a satiated state. AN patients rated food as less pleasant than controls. When satiated, AN patients showed decreased activation in left inferior parietal cortex relative to controls. When hungry, AN patients displayed weaker activation of the right visual occipital cortex than healthy controls. Food stimuli during satiety compared with hunger were associated with stronger right occipital activation in patients and with stronger activation in left lateral orbitofrontal cortex, the middle portion of the right anterior cingulate, and left middle temporal gyrus in controls. The observed group differences in the fMRI activation to food pictures point to decreased food-related somatosensory processing in AN during satiety and to attentional mechanisms during hunger that might facilitate restricted eating in AN.

  14. Multi-modal distraction: insights from children's limited attention.

    PubMed

    Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia

    2015-03-01

    How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
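    The distinction drawn above between image-based (cycles/image) and retina-based (cycles/degree) spatial frequency comes down to the image's angular subtense, which shrinks with viewing distance. A small sketch of the geometry (function names are illustrative):

    ```python
    import math

    def image_angle_deg(image_size_cm, viewing_distance_cm):
        """Angular subtense of an image at a given viewing distance."""
        return math.degrees(2 * math.atan(image_size_cm / (2 * viewing_distance_cm)))

    def retinal_sf(image_sf_cpi, image_size_cm, viewing_distance_cm):
        """Convert image-based SF (cycles/image) to retina-based SF (cycles/degree)."""
        return image_sf_cpi / image_angle_deg(image_size_cm, viewing_distance_cm)
    ```

    Doubling the viewing distance roughly doubles the cycles/degree of a face while leaving its cycles/image unchanged, which is why retina-based tuning, but not image-based tuning, carries information useful for social distance computation.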

  16. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial-like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  17. Brightness masking is modulated by disparity structure.

    PubMed

    Pelekanos, Vassilis; Ban, Hiroshi; Welchman, Andrew E

    2015-05-01

    The luminance contrast at the borders of a surface strongly influences the surface's apparent brightness, as demonstrated by a number of classic visual illusions. Such phenomena are compatible with a propagation mechanism believed to spread contrast information from borders to the interior. This process is disrupted by masking, where the perceived brightness of a target is reduced by the brief presentation of a mask (Paradiso & Nakayama, 1991), but the exact visual stage at which this happens remains unclear. In the present study, we examined whether brightness masking occurs at a monocular or a binocular level of the visual hierarchy. We used backward masking, whereby a briefly presented target stimulus is disrupted by a mask coming soon afterwards, to show that brightness masking is affected by binocular stages of visual processing. We manipulated the 3-D configurations (slant direction) of the target and mask and measured the differential disruption that masking causes in brightness estimation. We found that the masking effect was weaker when the stimuli had a different slant. We suggest that brightness masking is partly mediated by mid-level neuronal mechanisms, at a stage where binocular disparity edge structure has been extracted. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Early and late beta-band power reflect audiovisual perception in the McGurk illusion

    PubMed Central

    Senkowski, Daniel; Keil, Julian

    2015-01-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13–30 Hz) at short (0–500 ms) and long (500–800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. PMID:25568160

  19. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    PubMed

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.
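    Beta-band (13-30 Hz) power of an EEG epoch can be estimated, in its crudest form, as the mean FFT power within the band. EEG studies such as this one use time-resolved wavelet or multitaper estimates instead, so the following is only a simplified sketch:

    ```python
    import numpy as np

    def band_power(signal, fs, lo=13.0, hi=30.0):
        """Mean spectral power of `signal` within [lo, hi] Hz via the FFT.

        This static estimate ignores the time course (e.g. the 0-500 ms
        vs 500-800 ms windows analysed in the study); it only illustrates
        what 'power in the beta band' means.
        """
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()
    ```

    A "suppression" of beta-band power, as reported for McGurk trials, would appear as a lower post-stimulus `band_power` value relative to a pre-stimulus baseline.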

  20. Ground-plane influences on size estimation in early visual processing.

    PubMed

    Champion, Rebecca A; Warren, Paul A

    2010-07-21

    Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (e.g., consistent with a ground or ceiling plane). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane, in line with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determinative factor for visual attention. During the experiment, 28 observers watched scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using contents with low-level visual features. Univariate follow-up tests within a MANOVA detected that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
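Fixation-based saliency maps of the kind described above are typically built by placing a Gaussian at each recorded fixation and normalising the accumulated map. A minimal pure-Python sketch follows; the grid size and sigma are arbitrary choices, not the authors' parameters.

```python
import math

def saliency_map(fixations, width, height, sigma=2.0):
    """Accumulate an isotropic Gaussian at each fixation point.
    `fixations` is a list of (x, y) gaze coordinates in pixels."""
    smap = [[0.0] * width for _ in range(height)]
    for fx, fy in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                smap[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    peak = max(max(row) for row in smap)
    # Normalise to [0, 1] so maps from different conditions are comparable.
    return [[v / peak for v in row] for row in smap]
```

Maps built this way from the 2D and 3D viewing conditions can then be compared pixel-wise or via correlation to quantify attentional differences.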

  2. A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions.

    PubMed

    Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M

    2017-03-01

    This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
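The SEEV model referenced above allocates visual attention from salience, effort, expectancy, and value terms. The sketch below shows one common additive simplification of such an allocation rule; the unit coefficients and the clamping of negative scores are our assumptions, not part of the validated model.

```python
def seev_attention(aois, c_sal=1.0, c_eff=1.0, c_exp=1.0, c_val=1.0):
    """Return normalised probabilities of attending each area of interest.
    Each AOI is a dict with salience, effort, expectancy, and value
    entries in [0, 1]. Coefficients are illustrative placeholders."""
    scores = []
    for a in aois:
        s = (c_sal * a["salience"] - c_eff * a["effort"]
             + c_exp * a["expectancy"] + c_val * a["value"])
        scores.append(max(s, 0.0))  # clamp: no negative attention share
    total = sum(scores) or 1.0
    return [s / total for s in scores]
```

In the model described above, the expectancy term would additionally be driven up by the uncertainty in the operator's estimate of the state shown on each instrument.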

  3. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
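The reported gaze-shift biases can be caricatured with a much simpler weighting between reference frames than the full PVD model; the reduction below is our illustration, where a retinal weight of 0 reproduces the body-centred inertial result and a weight of roughly 13/28 ≈ 0.46 matches the average eye-only visual shift.

```python
def perceived_heading(true_heading_deg, gaze_offset_deg, retinal_weight):
    """Weighted reference-frame model of heading perception.

    retinal_weight = 0 -> purely body-centred (as found for inertial
    headings); retinal_weight = 1 -> purely retina-centred (the percept
    shifts fully opposite a gaze shift, as visual headings tend to)."""
    return true_heading_deg - retinal_weight * gaze_offset_deg
```

A +28° gaze shift with a retinal weight of 13/28 shifts the percept by -13°, matching the reported average, while the same gaze shift leaves an inertial heading (weight 0) unchanged.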

  5. The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment.

    PubMed

    Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo

    2016-01-01

    The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH group. This outcome pattern sustained when comparisons were restricted to subgroups of AHI and CHI adults, matched for current auditory speech reception abilities. The data yielded no differences between groups in performance in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in the sensitivity to morphosyntactic distortions when processing short masked sentences, presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. 
However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

  6. The Influence of Learning Strategies in the Acquisition, Retention, and Transfer of a Visual Tracking Task

    DTIC Science & Technology

    1979-08-01

    Psychology, Psychoanalysis and Neurology X. N.Y.: Van Nostrand Reinhold, 1977. Craik, F. I. M., & Lockhart, R. S. Levels of processing: A framework for... Morris, C. D., & Stein, B. S. Some general constraints on learning and memory research. In F. I. M. Craik & L. S. Cermak (Eds.), Levels of processing... Craik & Lockhart, 1972; Craik & Tulving, 1975). Although the dependent measures differ, the conclusions drawn remain similar. Strategy usage has a...

  7. The nature of short-term consolidation in visual working memory.

    PubMed

    Ricker, Timothy J; Hardman, Kyle O

    2017-11-01

    Short-term consolidation is the process by which stable working memory representations are created. This process is fundamental to cognition yet poorly understood. The present work examines short-term consolidation using a Bayesian hierarchical model of visual working memory recall to determine the underlying processes at work. Our results show that consolidation functions largely through changing the proportion of memory items successfully maintained until test. Although there was some evidence that consolidation affects representational precision, this change was modest and could not account for the bulk of the consolidation effect on memory performance. The time course of the consolidation function and selective influence of consolidation on specific serial positions strongly indicates that short-term consolidation induces an attentional blink. The blink leads to deficits in memory for the immediately following item when time pressure is introduced. Temporal distinctiveness accounts of the consolidation process are tested and ruled out. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Interpersonal touch suppresses visual processing of aversive stimuli

    PubMed Central

    Kawamichi, Hiroaki; Kitada, Ryo; Yoshihara, Kazufumi; Takahashi, Haruka K.; Sadato, Norihiro

    2015-01-01

    Social contact is essential for survival in human society. A previous study demonstrated that interpersonal contact alleviates pain-related distress by suppressing the activity of its underlying neural network. One explanation for this is that attention is shifted from the cause of distress to interpersonal contact. To test this hypothesis, we conducted a functional MRI (fMRI) study wherein eight pairs of close female friends rated the aversiveness of aversive and non-aversive visual stimuli under two conditions: joining hands either with a rubber model (rubber-hand condition) or with a close friend (human-hand condition). Subsequently, participants rated the overall comfortableness of each condition. The rating result after fMRI indicated that participants experienced greater comfortableness during the human-hand compared to the rubber-hand condition, whereas aversiveness ratings during fMRI were comparable across conditions. The fMRI results showed that the two conditions commonly produced aversive-related activation in both sides of the visual cortex (including V1, V2, and V5). An interaction between aversiveness and hand type showed rubber-hand-specific activation for (aversive > non-aversive) in other visual areas (including V1, V2, V3, and V4v). The effect of interpersonal contact on the processing of aversive stimuli was negatively correlated with the increment of attentional focus to aversiveness measured by a pain-catastrophizing scale. These results suggest that interpersonal touch suppresses the processing of aversive visual stimuli in the occipital cortex. This effect covaried with aversiveness-insensitivity, such that aversive-insensitive individuals might require a lesser degree of attentional capture to aversive-stimulus processing. As joining hands did not influence the subjective ratings of aversiveness, interpersonal touch may operate by redirecting excessive attention away from aversive characteristics of the stimuli. PMID:25904856

  9. Altered Connectivity of the Balance Processing Network After Tongue Stimulation in Balance-Impaired Individuals

    PubMed Central

    Tyler, Mitchell E.; Danilov, Yuri P.; Kaczmarek, Kurt A.; Meyerand, Mary E.

    2013-01-01

    Some individuals with balance impairment have hypersensitivity of the motion-sensitive visual cortices (hMT+) compared to healthy controls. Previous work showed that electrical tongue stimulation can reduce the exaggerated postural sway induced by optic flow in this subject population and decrease the hypersensitive response of hMT+. Additionally, a region within the brainstem (BS), likely containing the vestibular and trigeminal nuclei, showed increased optic flow-induced activity after tongue stimulation. The aim of this study was to understand how the modulation induced by tongue stimulation affects the balance-processing network as a whole and how modulation of BS structures can influence cortical activity. Four volumes of interest, discovered in a general linear model analysis, constitute major contributors to the balance-processing network. These regions were entered into a dynamic causal modeling analysis to map the network and measure any connection or topology changes due to the stimulation. Balance-impaired individuals had downregulated response of the primary visual cortex (V1) to visual stimuli but upregulated modulation of the connection between V1 and hMT+ by visual motion compared to healthy controls (p≤1E–5). This upregulation was decreased to near-normal levels after stimulation. Additionally, the region within the BS showed increased response to visual motion after stimulation compared to both prestimulation and controls. Stimulation to the tongue enters the central nervous system at the BS but likely propagates to the cortex through supramodal information transfer. We present a model to explain these brain responses that utilizes an anatomically present, but functionally dormant pathway of information flow within the processing network. PMID:23216162

  10. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    ERIC Educational Resources Information Center

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  11. Functional and structural comparison of visual lateralization in birds – similar but still different

    PubMed Central

    Ströckens, Felix

    2014-01-01

    Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. However, structural asymmetries vary substantially between the two species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processing are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically stronger stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. 
This analysis supports that environmental stimulation affects the balance between hemispheric-specific processing by lateralized interactions of bottom-up and top-down systems. PMID:24723898

  12. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
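View-matching orientation of the kind simulated above is often implemented as a rotational image difference function: rotate the current low-resolution panorama against a stored view and take the best-matching offset as a heading estimate. A minimal sketch, assuming 1-D grayscale panoramas that wrap around 360 degrees:

```python
def rotational_idf(current_view, stored_view):
    """Rotational image difference: sum-of-squared differences between a
    stored panoramic view and the current view rotated through every
    azimuthal offset. Returns the rotation (in pixels) that minimises
    the mismatch, i.e. a heading estimate."""
    n = len(current_view)
    diffs = []
    for shift in range(n):
        rotated = current_view[shift:] + current_view[:shift]
        diffs.append(sum((r - s) ** 2 for r, s in zip(rotated, stored_view)))
    return diffs.index(min(diffs))
```

Lower-resolution views blur fine detail and so generalise across nearby positions, which is one way to read the specificity/generalisation trade-off the authors describe.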

  13. Distinct GABAergic targets of feedforward and feedback connections between lower and higher areas of rat visual cortex.

    PubMed

    Gonchar, Yuri; Burkhalter, Andreas

    2003-11-26

    Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 maximally 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input in addition makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.

  14. Reading "Sky" and Seeing a Cloud: On the Relevance of Events for Perceptual Simulation

    ERIC Educational Resources Information Center

    Ostarek, Markus; Vigliocco, Gabriella

    2017-01-01

    Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained…

  15. The Influence of Typeface on Students' Perceptions of Online Instructors

    ERIC Educational Resources Information Center

    Louch, Michelle O'Brien; Stork, Elizabeth

    2014-01-01

    At its base, advertising is the process of using visual images and words to attract and convince consumers that a certain product has certain attributes. The same effect exists in electronic communication, strongly so in online courses where most if not all interaction between instructor and student is in writing. Arguably, if consumers make…

  16. Maternal Socioeconomic Status Influences the Range of Expectations during Language Comprehension in Adulthood

    ERIC Educational Resources Information Center

    Troyer, Melissa; Borovsky, Arielle

    2017-01-01

    In infancy, maternal socioeconomic status (SES) is associated with real-time language processing skills, but whether or not (and if so, how) this relationship carries into adulthood is unknown. We explored the effects of maternal SES in college-aged adults on eye-tracked, spoken sentence comprehension tasks using the visual world paradigm. When…

  17. Enhanced dimension-specific visual working memory in grapheme-color synesthesia.

    PubMed

    Terhune, Devin Blair; Wudarczyk, Olga Anna; Kochuparampil, Priya; Cohen Kadosh, Roi

    2013-10-01

    There is emerging evidence that the encoding of visual information and the maintenance of this information in a temporarily accessible state in working memory rely on the same neural mechanisms. A consequence of this overlap is that atypical forms of perception should influence working memory. We examined this by investigating whether having grapheme-color synesthesia, a condition characterized by the involuntary experience of color photisms when reading or representing graphemes, would confer benefits on working memory. Two competing hypotheses propose that superior memory in synesthesia results from information being coded in two information channels (dual-coding) or from superior dimension-specific visual processing (enhanced processing). We discriminated between these hypotheses in three n-back experiments in which controls and synesthetes viewed inducer and non-inducer graphemes and maintained color or grapheme information in working memory. Synesthetes displayed better color working memory than controls for both grapheme types, whereas the two groups did not differ in grapheme working memory. Further analyses excluded the possibility that enhanced working memory among synesthetes was due to greater color discrimination, stimulus color familiarity, or bidirectionality. These results reveal enhanced dimension-specific visual working memory in this population and supply further evidence for a close relationship between sensory processing and the maintenance of sensory information in working memory. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
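For reference, the n-back task used in these experiments asks on each trial whether the current stimulus matches the one presented n trials earlier; the target-detection logic can be sketched as:

```python
def nback_targets(stimuli, n):
    """Return one boolean per trial: True where the stimulus matches the
    one presented n trials earlier (a target the participant should
    respond to)."""
    return [i >= n and stimuli[i] == stimuli[i - n]
            for i in range(len(stimuli))]
```

In a color n-back with inducer graphemes, the maintained feature is the (synesthetic or veridical) color rather than the grapheme identity, which is what separates the dual-coding and enhanced-processing predictions.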

  18. Top-down processing of symbolic meanings modulates the visual word form area.

    PubMed

    Song, Yiying; Tian, Moqian; Liu, Jia

    2012-08-29

    Functional magnetic resonance imaging (fMRI) studies in humans have identified a region in the left middle fusiform gyrus consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing the sounds, meanings, or actions associated with words. Further, this top-down influence on the VWFA is independent of stimulus format. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated with the subjective experience of the strength of symbol-referent association across individuals. Therefore, the VWFA is likely a neural substrate for the interaction of the top-down processing of symbolic meanings with the analysis of bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.

  19. Early visual ERPs are influenced by individual emotional skills.

    PubMed

    Meaux, Emilie; Roux, Sylvie; Batty, Magali

    2014-08-01

    Processing information from faces is crucial to understanding others and to adapting to social life. Many studies have investigated responses to facial emotions to provide a better understanding of the processes and the neural networks involved. Moreover, several studies have revealed abnormalities of emotional face processing and their neural correlates in affective disorders. The aim of this study was to investigate whether early visual event-related potentials (ERPs) are affected by the emotional skills of healthy adults. Unfamiliar faces expressing the six basic emotions were presented to 28 young adults while recording visual ERPs. No specific task was required during the recording. Participants also completed the Social Skills Inventory (SSI) which measures social and emotional skills. The results confirmed that early visual ERPs (P1, N170) are affected by the emotions expressed by a face and also demonstrated that N170 and P2 are correlated to the emotional skills of healthy subjects. While N170 is sensitive to the subject's emotional sensitivity and expressivity, P2 is modulated by the ability of the subjects to control their emotions. We therefore suggest that N170 and P2 could be used as individual markers to assess strengths and weaknesses in emotional areas and could provide information for further investigations of affective disorders. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Masking of Figure-Ground Texture and Single Targets by Surround Inhibition: A Computational Spiking Model

    PubMed Central

    Supèr, Hans; Romeo, August

    2012-01-01

    A visual stimulus can be made invisible, i.e. masked, by the presentation of a second stimulus. In the sensory cortex, neural responses to a masked stimulus are suppressed, yet how this suppression comes about is still debated. Inhibitory models explain masking by asserting that the mask exerts an inhibitory influence on the responses of a neuron evoked by the target. However, other models argue that the masking interferes with recurrent or reentrant processing. Using computer modeling, we show that surround inhibition evoked by ON and OFF responses to the mask suppresses the responses to a briefly presented stimulus in forward and backward masking paradigms. Our model results resemble several previously described psychophysical and neurophysiological findings in perceptual masking experiments and are in line with earlier theoretical descriptions of masking. We suggest that precise spatiotemporal influence of surround inhibition is relevant for visual detection. PMID:22393370
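The surround-inhibition account can be caricatured with a 1-D difference-of-Gaussians stage, in which flanking mask energy drives broad inhibition that suppresses the response to a central target. This rate-based sketch is far simpler than the authors' spiking model, and all parameter values are arbitrary.

```python
import math

def dog_response(stimulus, sigma_c=1.0, sigma_s=3.0, k=0.5):
    """1-D difference-of-Gaussians: each unit's response is narrow local
    excitation minus broader surround inhibition, half-wave rectified."""
    n = len(stimulus)
    out = []
    for i in range(n):
        exc = inh = 0.0
        for j in range(n):
            d2 = (i - j) ** 2
            exc += stimulus[j] * math.exp(-d2 / (2 * sigma_c ** 2))
            inh += stimulus[j] * math.exp(-d2 / (2 * sigma_s ** 2))
        out.append(max(exc - k * inh, 0.0))
    return out
```

Adding mask elements two positions away from a lone target lowers the central unit's rectified response, qualitatively mirroring masking by surround inhibition.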

  1. The functional BDNF Val66Met polymorphism affects functions of pre-attentive visual sensory memory processes.

    PubMed

    Beste, Christian; Schneider, Daniel; Epplen, Jörg T; Arning, Larissa

    2011-01-01

    The brain-derived neurotrophic factor (BDNF), a member of the neurotrophin family, is involved in nerve growth and survival. Especially, a single nucleotide polymorphism (SNP) in the BDNF gene, Val66Met, has gained a lot of attention, because of its effect on activity-dependent BDNF secretion and its link to impaired memory processes. We hypothesize that the BDNF Val66Met polymorphism may have modulatory effects on the visual sensory (iconic) memory performance. Two hundred and eleven healthy German students (106 female and 105 male) were included in the data analysis. Since BDNF is also discussed to be involved in the pathogenesis of depression, we additionally tested for possible interactions with depressive mood. The BDNF Val66Met polymorphism significantly influenced iconic-memory performance, with the combined Val/Met-Met/Met genotype group revealing less time stability of information stored in iconic memory than the Val/Val group. Furthermore, this stability was positively correlated with depressive mood exclusively in the Val/Val genotype group. Thus, these results show that the BDNF Val66Met polymorphism has an effect on pre-attentive visual sensory memory processes. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender

    PubMed Central

    Rozenkrants, Bella; Polich, John

    2008-01-01

    Objective: To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods: Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results: High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion: The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance: Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987

  3. A BRDF study on the visual appearance properties of titanium in the heating process

    NASA Astrophysics Data System (ADS)

    Liu, Yanlei; Yu, Kun; Li, Longfei; Zhao, Yuejin; Liu, Zilong; Liu, Yufang

    2018-04-01

    The bidirectional reflectance distribution function (BRDF) offers a complete description of the spectral and spatial characteristics of opaque materials, i.e. the visual appearance properties of materials. In this letter, the visual appearance properties of titanium during the heating process are investigated by BRDF. The reliability of our results is verified by comparing the experimental data of polytetrafluoroethylene with the reference data. The in-plane spectral BRDF in the visible region of heated commercially pure Ti at different incident and reflected zenith angles is measured. The experimental results indicate that the change tendency of BRDF vs. wavelength is not influenced by the incident and reflected zenith angles, implying that the colours of Ti may arise from pigment colouration rather than structural colouration. Scanning electron microscopy (SEM) and X-ray diffraction (XRD) testing are performed, and no titanium oxides are detected. These results imply that the colours may be generated by intermediate products formed during the heating process. Powder samples are prepared, and the fact that they show the same colours as the flake samples indirectly proves the validity of our conclusion. In addition, the spectral BRDF of optically smooth samples is measured, and the results verify the reliability of our conclusion.
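    The BRDF measured above has a standard radiometric definition: the ratio of the differential radiance reflected into direction (θ_r, φ_r) to the differential irradiance arriving from direction (θ_i, φ_i), at wavelength λ. The in-plane measurements described above fix the azimuths so that only the two zenith angles and λ vary:

    ```latex
    f_r(\theta_i, \varphi_i; \theta_r, \varphi_r; \lambda)
      = \frac{\mathrm{d}L_r(\theta_r, \varphi_r; \lambda)}
             {\mathrm{d}E_i(\theta_i, \varphi_i; \lambda)}
      \quad [\mathrm{sr}^{-1}]
    ```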

  4. Micro-calibration of space and motion by photoreceptors synchronized in parallel with cortical oscillations: A unified theory of visual perception.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike

    2018-01-01

    A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion; 2) the purpose of lateral inhibition; 3) the speed of visual perception; and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness. This could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex, forming a neural space based on membrane-potential oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them the "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves that are higher in strength and volume allow communication among the retina, thalamus, and various areas of the cortex; synchronization brings cortical faculties to the retina, while the thalamus couples the retina to the rest of the brain through gamma-oscillation activity. This novel theory lays the groundwork for further research by providing a theoretical understanding that expands the functions of the retina, photoreceptors, and retinal plexus to include the parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Competition between Visual Events Modulates the Influence of Salience during Free-Viewing of Naturalistic Videos

    PubMed Central

    Nardo, Davide; Console, Paola; Reverberi, Carlo; Macaluso, Emiliano

    2016-01-01

    In daily life the brain is exposed to a large amount of external signals that compete for processing resources. The attentional system can select relevant information based on many possible combinations of goal-directed and stimulus-driven control signals. Here, we investigate the behavioral and physiological effects of competition between distinctive visual events during free-viewing of naturalistic videos. Nineteen healthy subjects underwent functional magnetic resonance imaging (fMRI) while viewing short video-clips of everyday life situations, without any explicit goal-directed task. Each video contained either a single semantically-relevant event on the left or right side (Lat-trials), or multiple distinctive events in both hemifields (Multi-trials). For each video, we computed a salience index to quantify the lateralization bias due to stimulus-driven signals, and a gaze index (based on eye-tracking data) to quantify the efficacy of the stimuli in capturing attention to either side. Behaviorally, our results showed that stimulus-driven salience influenced spatial orienting only in presence of multiple competing events (Multi-trials). fMRI results showed that the processing of competing events engaged the ventral attention network, including the right temporoparietal junction (R TPJ) and the right inferior frontal cortex. Salience was found to modulate activity in the visual cortex, but only in the presence of competing events; while the orienting efficacy of Multi-trials affected activity in both the visual cortex and posterior parietal cortex (PPC). We conclude that in presence of multiple competing events, the ventral attention system detects semantically-relevant events, while regions of the dorsal system make use of saliency signals to select relevant locations and guide spatial orienting. PMID:27445760
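    The abstract above does not give the exact formulas for the salience and gaze indices; a common convention for such lateralization measures is a normalized left/right contrast. The sketch below uses that convention with hypothetical numbers:

    ```python
    def lateralization_index(left: float, right: float) -> float:
        """Normalized left/right bias in [-1, 1]; positive = rightward.

        The exact formulas used by Nardo et al. are not given in the
        abstract; this (R - L) / (R + L) form is only a common convention.
        """
        total = left + right
        if total == 0:
            return 0.0
        return (right - left) / total

    # Hypothetical salience index from summed saliency-map energy per hemifield:
    salience_idx = lateralization_index(left=12.0, right=36.0)   # 0.5, right-biased

    # Hypothetical gaze index from fixation time (ms) spent in each hemifield:
    gaze_idx = lateralization_index(left=900.0, right=300.0)     # -0.5, left-biased
    ```

    Values near ±1 indicate strongly lateralized salience or gaze; values near 0 indicate balanced hemifields.
    
    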

  6. Does the choice of display system influence perception and visibility of clinically relevant features in digital pathology images?

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Rostang, Johan; Avanaki, Ali; Espig, Kathryn; Xthona, Albert; Cocuranu, Ioan; Parwani, Anil V.; Pantanowitz, Liron

    2014-03-01

    Digital pathology systems typically consist of a slide scanner, processing software, visualization software, and finally a workstation with a display for visualization of the digital slide images. This paper studies whether digital pathology images can look different when presented on different display systems, and whether these visual differences can result in different perceived contrast of clinically relevant features. By analyzing a set of four digital pathology images of different subspecialties on three different display systems, it was concluded that pathology images do look different when visualized on different display systems. These visual differences become important when they are located in areas of the digital slide that contain clinically relevant features. Based on a calculation of dE2000 differences between background and clinically relevant features, it was clear that the perceived contrast of clinically relevant features is influenced by the choice of display system. Furthermore, the specific calibration target chosen for the display system seems to have an important effect on the perceived contrast of clinically relevant features. Preliminary results suggest that calibration to the DICOM GSDF performed slightly worse than sRGB, while a new experimental calibration target, CSDF, performed better than both DICOM GSDF and sRGB. This result is promising, as it suggests that further research could lead to a better-defined, optimized calibration target for digital pathology images, with a positive effect on clinical performance.
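    The dE2000 metric used above is the CIEDE2000 colour difference, computed in CIELAB space. Its full formula adds lightness, chroma and hue weightings; as a minimal sketch of the underlying idea, the older CIE76 predecessor is simply the Euclidean distance between two Lab triples. The Lab values below are hypothetical, for illustration only:

    ```python
    import math

    def delta_e_cie76(lab1, lab2):
        """CIE76 colour difference: Euclidean distance in CIELAB.

        Illustrative stand-in only; the study used the more elaborate
        CIEDE2000 formula, which adds lightness/chroma/hue weightings.
        """
        return math.dist(lab1, lab2)

    # Hypothetical background vs. stained-feature Lab values:
    background = (90.0, 2.0, -1.0)
    feature = (40.0, 25.0, -30.0)
    print(round(delta_e_cie76(background, feature), 1))  # ≈ 62.2
    ```

    A larger delta E means the feature stands out more from its background; per-display calibration changes the displayed Lab values and hence this perceived contrast.
    
    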

  7. The Influence of Individual Differences on Diagrammatic Communication and Problem Representation

    ERIC Educational Resources Information Center

    King, Laurel A.

    2009-01-01

    Understanding the user and customizing the interface to augment cognition and usability are goals of human computer interaction research and design. Yet, little is known about the influence of individual visual-verbal information presentation preferences on visual navigation and screen element usage. If consistent differences in visual navigation…

  8. Harmonic context influences pitch class equivalence judgments through gestalt and congruency effects.

    PubMed

    Slana, Anka; Repovš, Grega; Fitch, W Tecumseh; Gingras, Bruno

    2016-05-01

    The context in which a stimulus is presented shapes the way it is processed. This effect has been studied extensively in the field of visual perception. Our understanding of how context affects the processing of auditory stimuli is, however, rather limited. Western music is primarily built on melodies (succession of pitches) typically accompanied by chords (harmonic context), which provides a natural template for the study of context effects in auditory processing. Here, we investigated whether pitch class equivalence judgments of tones are affected by the harmonic context within which the target tones are embedded. Nineteen musicians and 19 non-musicians completed a change detection task in which they were asked to determine whether two successively presented target tones, heard either in isolation or with a chordal accompaniment (same or different chords), belonged to the same pitch class. Both musicians and non-musicians were most accurate when the chords remained the same, less so in the absence of chordal accompaniment, and least when the chords differed between both target tones. Further analysis investigating possible mechanisms underpinning these effects of harmonic context on task performance revealed that both a change in gestalt (change in either chord or pitch class), as well as incongruency between change in target tone pitch class and change in chords, led to reduced accuracy and longer reaction times. Our results demonstrate that, similarly to visual processing, auditory processing is influenced by gestalt and congruency effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Visual and proprioceptive interaction in patients with bilateral vestibular loss☆

    PubMed Central

    Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.

    2014-01-01

    Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high-frequency (100 Hz) stimulus and a low-frequency (30 Hz) control stimulus were applied over the left splenius capitis; only the high-frequency stimulus generates a significant proprioceptive input. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high-level neck proprioceptive input had a greater cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute for the known visuo-vestibular interaction reported in fMRI studies of normal subjects. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564

  10. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    PubMed

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high-frequency (100 Hz) stimulus and a low-frequency (30 Hz) control stimulus were applied over the left splenius capitis; only the high-frequency stimulus generates a significant proprioceptive input. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high-level neck proprioceptive input had a greater cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute for the known visuo-vestibular interaction reported in fMRI studies of normal subjects. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients.

  11. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  12. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  13. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  14. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  15. Processing of food pictures: influence of hunger, gender and calorie content.

    PubMed

    Frank, Sabine; Laharnar, Naima; Kullmann, Stephanie; Veit, Ralf; Canova, Carlos; Hegner, Yiwen Li; Fritsche, Andreas; Preissl, Hubert

    2010-09-02

    In most cases obesity, a major risk factor for type 2 diabetes mellitus and other associated chronic diseases, is caused by excessive eating. For a better understanding of eating behavior, it is necessary to determine how it is modulated by factors such as the calorie content of food, satiety and gender. Twelve healthy normal-weight participants (six female) were investigated in a functional magnetic resonance imaging (fMRI) study. In order to prevent the influence of social acceptability, an implicit one-back task was chosen for stimulus presentation. We presented food (high- and low-caloric) and non-food pictures in a block design, and subjects had to indicate by button press whether two consecutive pictures were the same or not. Each subject performed the task in a hungry and a satiated state on two different days. High-caloric pictures compared to low-caloric pictures led to increased activity in food-processing and reward-related areas, such as the orbitofrontal and insular cortex. In addition, we found activation differences in visual areas (occipital lobe), despite the fact that the stimuli were matched for their physical features. Detailed investigation also revealed gender-specific effects in the fusiform gyrus: women showed higher activation in the fusiform gyrus while viewing high-caloric pictures in the hungry state. This study shows that the calorie content of food pictures modulates the activation of brain areas related to reward processing and even early visual areas. In addition, satiation seems to influence the processing of food pictures differently in men and women. Even though an implicit task was used, activation differences could also be observed in the orbitofrontal cortex, known to be activated during explicit stimulation with food-related stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Simultaneous selection by object-based attention in visual and frontal cortex

    PubMed Central

    Pooresmaeili, Arezoo; Poort, Jasper; Roelfsema, Pieter R.

    2014-01-01

    Models of visual attention hold that top-down signals from frontal cortex influence information processing in visual cortex. It is unknown whether situations exist in which visual cortex actively participates in attentional selection. To investigate this question, we simultaneously recorded neuronal activity in the frontal eye fields (FEF) and primary visual cortex (V1) during a curve-tracing task in which attention shifts are object-based. We found that accurate performance was associated with similar latencies of attentional selection in both areas and that the latency in both areas increased if the task was made more difficult. The amplitude of the attentional signals in V1 saturated early during a trial, whereas these selection signals kept increasing for a longer time in FEF, until the moment of an eye movement, as if FEF integrated attentional signals present in early visual cortex. In erroneous trials, we observed an interareal latency difference because FEF selected the wrong curve before V1 and imposed its erroneous decision onto visual cortex. The neuronal activity in visual and frontal cortices was correlated across trials, and this trial-to-trial coupling was strongest for the attended curve. These results imply that selective attention relies on reciprocal interactions within a large network of areas that includes V1 and FEF. PMID:24711379

  17. Reward associations impact both iconic and visual working memory.

    PubMed

    Infanti, Elisa; Hickey, Clayton; Turatto, Massimo

    2015-02-01

    Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results to the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task in which different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and in visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
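    The summary above does not detail the Data Visualization Saliency Model itself. As a toy illustration of the general principle such saliency models build on (high local contrast attracts attention), here is a minimal centre-surround contrast map over a small grayscale grid; the grid and weighting are purely hypothetical:

    ```python
    def toy_saliency(image):
        """Toy centre-surround saliency: |pixel - mean of its neighbours|.

        The actual Data Visualization Saliency Model is far richer; this
        sketch only illustrates the basic 'local contrast attracts
        attention' idea that saliency models build on.
        """
        h, w = len(image), len(image[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                neigh = [image[ny][nx]
                         for ny in range(max(0, y - 1), min(h, y + 2))
                         for nx in range(max(0, x - 1), min(w, x + 2))
                         if (ny, nx) != (y, x)]
                out[y][x] = abs(image[y][x] - sum(neigh) / len(neigh))
        return out

    # A single bright element on a dark background is the most salient pixel:
    img = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
    sal = toy_saliency(img)
    print(max(max(row) for row in sal))  # 9.0, at the centre
    ```
    
    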

  19. The hows and whys of face memory: level of construal influences the recognition of human faces

    PubMed Central

    Wyer, Natalie A.; Hollins, Timothy J.; Pahl, Sabine; Roper, Jean

    2015-01-01

    Three experiments investigated the influence of level of construal (i.e., the interpretation of actions in terms of their meaning or their details) on different stages of face memory. We employed a standard multiple-face recognition paradigm, with half of the faces inverted at test. Construal level was manipulated prior to recognition (Experiment 1), during study (Experiment 2), or both (Experiment 3). The results support a general advantage for high-level construal over low-level construal at both study and test, and suggest that matching processing style between study and recognition confers no advantage. These experiments provide additional evidence in support of a link between semantic processing (i.e., construal) and visual (i.e., face) processing. We conclude with a discussion of implications for current theories relating to both construal and face processing. PMID:26500586

  20. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    PubMed

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape the neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during the processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in the speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in the anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
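    The leave-one-participant-out validation scheme described above is independent of the classifier used; a minimal sketch of just the splitting logic (the subject labels are hypothetical):

    ```python
    def leave_one_subject_out(subject_ids):
        """Yield (train, test) splits in which each subject is held out once.

        Mirrors the validation scheme described above (16 participants:
        train on 15, test on the one left out), regardless of the model.
        """
        for held_out in subject_ids:
            train = [s for s in subject_ids if s != held_out]
            yield train, [held_out]

    subjects = [f"sub{i:02d}" for i in range(1, 17)]  # 16 participants
    splits = list(leave_one_subject_out(subjects))
    print(len(splits))        # 16 folds
    print(len(splits[0][0]))  # 15 training subjects per fold
    ```

    Averaging classifier accuracy over the 16 folds estimates how well the learned signature patterns generalize to an unseen participant.
    
    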
