Sample records for abstract visual stimuli

  1. Developmental Changes in Visual Scanning of Dynamic Faces and Abstract Stimuli in Infants: A Longitudinal Study

    ERIC Educational Resources Information Center

    Hunnius, Sabine; Geuze, Reint H.

    2004-01-01

    The characteristics of scanning patterns between the ages of 6 and 26 weeks were investigated through repeated assessments of 10 infants. Eye movements were recorded using a corneal-reflection system while the infants looked at 2 dynamic stimuli: the naturally moving face of their mother and an abstract stimulus. Results indicated that the way…

  2. The primate amygdala represents the positive and negative value of visual stimuli during learning

    PubMed Central

    Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel

    2008-01-01

    Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1–5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys’ learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
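
    The conditioning-with-reversal procedure described above can be illustrated with a minimal delta-rule (Rescorla-Wagner-style) value update. This is an illustrative sketch only, not the authors' model; the learning rate, trial counts, and reversal point are arbitrary assumptions.

      # Minimal sketch of value learning with a mid-session reversal
      # (illustrative only; not the model used in the cited study).

      def update_value(v, reward, alpha=0.1):
          """Delta-rule update of a stimulus value estimate."""
          return v + alpha * (reward - v)

      values = {"image_A": 0.0, "image_B": 0.0}
      outcomes = {"image_A": +1.0, "image_B": -1.0}    # initial value assignments

      for trial in range(200):
          if trial == 100:                             # reversal of image-value assignments
              outcomes = {"image_A": -1.0, "image_B": +1.0}
          for img, r in outcomes.items():
              values[img] = update_value(values[img], r)

      print(values)   # estimates track the current (post-reversal) assignments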

  3. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    PubMed

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  4. Neural responses to salient visual stimuli.

    PubMed Central

    Morris, J S; Friston, K J; Dolan, R J

    1997-01-01

    The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546

  5. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  6. Computer programming for generating visual stimuli.

    PubMed

    Bukhari, Farhan; Kurylo, Daniel D

    2008-02-01

    Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
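
    As an illustration of the trial-event framework this abstract describes (and not the downloadable program itself), the sketch below shows one way such a loop can be organized in Python: each trial displays a message and a stimulus, records a response and reaction time, and applies a simple contingency rule. The display and response functions are stand-in stubs using console I/O.

      # Generic trial-loop sketch (illustrative only). The graphics/input layer is
      # stubbed out with console I/O so the control flow runs as-is.
      import random
      import time

      def draw_stimulus(params):           # stub: a real experiment would render graphics here
          print(f"[stimulus] contrast={params['contrast']}")

      def get_response():                  # stub: a real experiment would poll the keyboard
          time.sleep(random.uniform(0.2, 0.6))
          return random.choice(["left", "right", None])

      def run_trial(params):
          print("Fixate the centre of the screen.")    # text-message event
          draw_stimulus(params)
          t0 = time.monotonic()
          key = get_response()
          rt = time.monotonic() - t0
          return {"params": params, "key": key, "rt": rt}

      conditions = [{"contrast": c} for c in (0.1, 0.5, 1.0)] * 5
      random.shuffle(conditions)

      results = []
      for cond in conditions:
          res = run_trial(cond)
          results.append(res)
          # contingency algorithm: re-queue missed trials (capped)
          if res["key"] is None and len(conditions) < 30:
              conditions.append(cond)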

  7. Lateral eye-movement responses to visual stimuli.

    PubMed

    Wilbur, M P; Roberts-Wilbur, J

    1985-08-01

    The association of left lateral eye-movement with emotionality or arousal of affect and of right lateral eye-movement with cognitive/interpretive operations and functions was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology. There were 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with existing lateral eye-movement literature and also are additive by using visual stimuli that do not require the explicit response or implicit processing of verbal questioning.

  8. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli

    PubMed Central

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542

  9. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
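
    One common way to test whether frames of population synchrony (candidate "ensembles") exceed chance is to compare observed coactivity against circularly shifted surrogate data. The sketch below illustrates that generic idea on synthetic data; it is not the multidimensional analysis pipeline used in the cited study.

      # Shuffle-based test for above-chance population coactivity (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      raster = rng.random((100, 2000)) < 0.02        # neurons x frames, binarized activity

      coactivity = raster.sum(axis=0)                # cells active per frame

      # Null distribution: circularly shift each neuron independently, preserving
      # per-cell statistics while destroying synchrony across cells.
      null_max = []
      for _ in range(200):
          shifts = rng.integers(raster.shape[1], size=raster.shape[0])
          surrogate = np.stack([np.roll(row, s) for row, s in zip(raster, shifts)])
          null_max.append(surrogate.sum(axis=0).max())

      threshold = np.percentile(null_max, 95)
      ensemble_frames = np.where(coactivity > threshold)[0]
      print(len(ensemble_frames), "frames exceed the 95th-percentile surrogate threshold")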

  10. Effects of Visual and Verbal Stimuli on Children's Learning of Concrete and Abstract Prose.

    ERIC Educational Resources Information Center

    Hannafin, Michael J.; Carey, James O.

    A total of 152 fourth grade students participated in a study examining the effects of visual-only, verbal-only, and combined audiovisual prose presentations and different elaboration strategy conditions on student learning of abstract and concrete prose. The students saw and/or heard a short animated story, during which they were instructed to…

  11. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  12. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  13. Positive mood broadens visual attention to positive stimuli.

    PubMed

    Wadlinger, Heather A; Isaacowitz, Derek M

    2006-03-01

    In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.

  14. Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex

    PubMed Central

    Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na

    2015-01-01

    The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, the orientation selectivity for UV stimuli increased steadily during development, but not direction selectivity. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604
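
    The orientation- and direction-selectivity measures referred to above are often computed from a direction tuning curve as simple contrast indices. The sketch below uses one widely used formulation with made-up response values; the cited study may use a different metric (e.g., circular variance).

      # Direction- and orientation-selectivity indices from a tuning curve (illustrative).
      import numpy as np

      directions = np.arange(0, 360, 45)                               # stimulus directions (deg)
      responses = np.array([2.0, 1.0, 0.5, 0.8, 6.0, 1.2, 0.6, 0.9])   # mean response per direction

      n = len(directions)
      pref = int(responses.argmax())
      opp = (pref + n // 2) % n                  # opposite direction
      orth = (pref + n // 4) % n                 # orthogonal direction

      dsi = (responses[pref] - responses[opp]) / (responses[pref] + responses[opp])

      r_pref_ori = (responses[pref] + responses[opp]) / 2              # preferred orientation (both directions)
      r_orth_ori = (responses[orth] + responses[(orth + n // 2) % n]) / 2
      osi = (r_pref_ori - r_orth_ori) / (r_pref_ori + r_orth_ori)
      print(f"DSI = {dsi:.2f}, OSI = {osi:.2f}")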

  15. Trained and Derived Relations with Pictures versus Abstract Stimuli as Nodes

    ERIC Educational Resources Information Center

    Arntzen, Erik; Lian, Torunn

    2010-01-01

    Earlier studies have shown divergent results concerning the use of familiar picture stimuli in demonstration of equivalence. In the current experiment, we trained 16 children to form three 3-member classes in a many-to-one training structure. Half of the participants were exposed first to a condition with all abstract stimuli and then to a…

  16. Visual stimuli and written production of deaf signers.

    PubMed

    Jacinto, Laís Alves; Ribeiro, Karen Barros; Soares, Aparecido José Couto; Cárnio, Maria Silvia

    2012-01-01

    To verify the interference of visual stimuli in the written production of deaf signers with no complaints regarding reading and writing. The research group consisted of 12 students with education between the 4th and 5th grade of elementary school, with severe or profound sensorineural hearing loss, users of LIBRAS, and with an alphabetical writing level. The evaluation was performed with pictures in a logical sequence and an action picture. The analysis used the communicative competence criteria. There were no differences in the written production of the subjects between the two stimuli. In all texts there was no title or punctuation, verbs were in the infinitive form, cohesive links were lacking, and created words were included. The different visual stimuli did not affect the production of texts.

  17. A test of the symbol interdependency hypothesis with both concrete and abstract stimuli.

    PubMed

    Malhi, Simritpal Kaur; Buchanan, Lori

    2018-01-01

    In Experiment 1, the symbol interdependency hypothesis was tested with both concrete and abstract stimuli. Symbolic (i.e., semantic neighbourhood distance) and embodied (i.e., iconicity) factors were manipulated in two tasks-one that tapped symbolic relations (i.e., semantic relatedness judgment) and another that tapped embodied relations (i.e., iconicity judgment). Results supported the symbol interdependency hypothesis in that the symbolic factor was recruited for the semantic relatedness task and the embodied factor was recruited for the iconicity task. Across tasks, and especially in the iconicity task, abstract stimuli resulted in shorter RTs. This finding was in contrast to the concreteness effect where concrete words result in shorter RTs. Experiment 2 followed up on this finding by replicating the iconicity task from Experiment 1 in an ERP paradigm. Behavioural results continued to show a reverse concreteness effect with shorter RTs for abstract stimuli. However, ERP results paralleled the N400 and anterior N700 concreteness effects found in the literature, with more negative amplitudes for concrete stimuli.

  18. A test of the symbol interdependency hypothesis with both concrete and abstract stimuli

    PubMed Central

    Malhi, Simritpal Kaur; Buchanan, Lori

    2018-01-01

    In Experiment 1, the symbol interdependency hypothesis was tested with both concrete and abstract stimuli. Symbolic (i.e., semantic neighbourhood distance) and embodied (i.e., iconicity) factors were manipulated in two tasks—one that tapped symbolic relations (i.e., semantic relatedness judgment) and another that tapped embodied relations (i.e., iconicity judgment). Results supported the symbol interdependency hypothesis in that the symbolic factor was recruited for the semantic relatedness task and the embodied factor was recruited for the iconicity task. Across tasks, and especially in the iconicity task, abstract stimuli resulted in shorter RTs. This finding was in contrast to the concreteness effect where concrete words result in shorter RTs. Experiment 2 followed up on this finding by replicating the iconicity task from Experiment 1 in an ERP paradigm. Behavioural results continued to show a reverse concreteness effect with shorter RTs for abstract stimuli. However, ERP results paralleled the N400 and anterior N700 concreteness effects found in the literature, with more negative amplitudes for concrete stimuli. PMID:29590121

  19. Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.

    PubMed

    Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor

    2015-04-01

    Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal) by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no evidence for the presence of an evoked response anymore. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking. © 2014 Wiley Periodicals, Inc.
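
    One of the measures named above, the strength of the scalp field (global field power, GFP), can be compared across conditions with a time-wise randomization test. The sketch below illustrates the generic logic on synthetic epochs; it is not the cited study's analysis, and all sizes and effect values are assumptions.

      # Time-wise permutation test on global field power (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)
      n_trials, n_electrodes, n_times = 40, 64, 300
      cond = np.repeat([0, 1], n_trials // 2)                 # two stimulus conditions
      erp = rng.normal(0, 1, (n_trials, n_electrodes, n_times))
      erp[cond == 1, :, 120:160] += 0.4                       # small simulated evoked difference

      def gfp(trials):
          """GFP: spatial standard deviation across electrodes of the average ERP."""
          return trials.mean(axis=0).std(axis=0)

      observed = np.abs(gfp(erp[cond == 1]) - gfp(erp[cond == 0]))

      null = np.zeros((1000, n_times))
      for i in range(1000):
          perm = rng.permutation(cond)
          null[i] = np.abs(gfp(erp[perm == 1]) - gfp(erp[perm == 0]))

      p = (null >= observed).mean(axis=0)        # time-wise permutation p-values
      print("time points with p < 0.05:", int((p < 0.05).sum()))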

  20. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

    Rationale: Unconscious perception of various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study tried to assess if unconscious visual perception could occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. The perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment with a modified awareness scale annexed to each question with 4 categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At larger durations of stimulus presentation, significantly correct answers were coupled with a certain conscious awareness. Discussion: At values of 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on stimulus durations in the range of interest between 50 and 16.66 ms.

  21. Do visually salient stimuli reduce children's risky decisions?

    PubMed

    Schwebel, David C; Lucas, Elizabeth K; Pearson, Alana

    2009-09-01

    Children tend to overestimate their physical abilities, and that tendency is related to risk for unintentional injury. This study tested whether or not children estimate their physical ability differently when exposed to stimuli that were highly visually salient due to fluorescent coloring. Sixty-nine 6-year-olds judged physical ability to complete laboratory-based physical tasks. Half judged ability using tasks that were painted black; the other half judged the same tasks, but the stimuli were striped black and fluorescent lime-green. Results suggest the two groups judged similarly, but children took longer to judge perceptually ambiguous tasks when those tasks were visually salient. In other words, visual salience increased decision-making time but not accuracy of judgment. These findings held true after controlling for demographic and temperament characteristics.

  22. Sex Differences in Response to Visual Sexual Stimuli: A Review

    PubMed Central

    Rupp, Heather A.; Wallen, Kim

    2009-01-01

    This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311

  23. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  24. Gestalt Perceptual Organization of Visual Stimuli Captures Attention Automatically: Electrophysiological Evidence

    PubMed Central

    Marini, Francesco; Marzi, Carlo A.

    2016-01-01

    The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. Highlights: We studied the neural signatures of the automatic processes of visual attention elicited by Gestalt stimuli. We found that a reliable early correlate of attentional capture turned out to be the N2pc component. Perceptual and cognitive processing of Gestalt stimuli is associated with larger N1, N2, and P3 components. PMID:27630555

  25. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    PubMed

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  26. Interpersonal touch suppresses visual processing of aversive stimuli

    PubMed Central

    Kawamichi, Hiroaki; Kitada, Ryo; Yoshihara, Kazufumi; Takahashi, Haruka K.; Sadato, Norihiro

    2015-01-01

    Social contact is essential for survival in human society. A previous study demonstrated that interpersonal contact alleviates pain-related distress by suppressing the activity of its underlying neural network. One explanation for this is that attention is shifted from the cause of distress to interpersonal contact. To test this hypothesis, we conducted a functional MRI (fMRI) study wherein eight pairs of close female friends rated the aversiveness of aversive and non-aversive visual stimuli under two conditions: joining hands either with a rubber model (rubber-hand condition) or with a close friend (human-hand condition). Subsequently, participants rated the overall comfortableness of each condition. The rating result after fMRI indicated that participants experienced greater comfortableness during the human-hand compared to the rubber-hand condition, whereas aversiveness ratings during fMRI were comparable across conditions. The fMRI results showed that the two conditions commonly produced aversive-related activation in both sides of the visual cortex (including V1, V2, and V5). An interaction between aversiveness and hand type showed rubber-hand-specific activation for (aversive > non-aversive) in other visual areas (including V1, V2, V3, and V4v). The effect of interpersonal contact on the processing of aversive stimuli was negatively correlated with the increment of attentional focus to aversiveness measured by a pain-catastrophizing scale. These results suggest that interpersonal touch suppresses the processing of aversive visual stimuli in the occipital cortex. This effect covaried with aversiveness-insensitivity, such that aversive-insensitive individuals might require a lesser degree of attentional capture to aversive-stimulus processing. As joining hands did not influence the subjective ratings of aversiveness, interpersonal touch may operate by redirecting excessive attention away from aversive characteristics of the stimuli. PMID:25904856

  27. Brain response to visual sexual stimuli in homosexual pedophiles.

    PubMed

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men.

  28. Brain response to visual sexual stimuli in homosexual pedophiles

    PubMed Central

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    Objective The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. Method A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. Results In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Conclusions Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men. PMID:18197269

  29. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

    Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by a poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect/reduce the propensity to visual neglect of unexpected stimuli. © 2016 Society for Psychophysiological Research.

  30. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
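
    Steady-state evoked potentials of the kind measured here are usually quantified as the spectral amplitude at the flicker frequency (and its harmonics). The sketch below shows that generic computation on a synthetic single-channel signal; the sampling rate, flicker frequency, and noise level are assumptions, not values from the cited study.

      # SSEP amplitude and SNR at the flicker frequency via the Fourier spectrum (illustrative).
      import numpy as np

      fs = 500.0                  # sampling rate (Hz), assumed
      flicker = 7.5               # checkerboard flicker frequency (Hz), assumed
      t = np.arange(0, 20, 1 / fs)

      rng = np.random.default_rng(2)
      eeg = 0.5 * np.sin(2 * np.pi * flicker * t) + rng.normal(0, 2.0, t.size)

      spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      bin_idx = int(np.argmin(np.abs(freqs - flicker)))
      neighbours = np.r_[bin_idx - 12:bin_idx - 2, bin_idx + 3:bin_idx + 13]   # surrounding bins
      snr = spectrum[bin_idx] / spectrum[neighbours].mean()
      print(f"amplitude at {flicker} Hz: {spectrum[bin_idx]:.3f}, SNR ~ {snr:.1f}")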

  31. Selective attention determines emotional responses to novel visual stimuli.

    PubMed

    Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T

    2003-11-01

    Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.

  32. Characterizing visual asymmetries in contrast perception using shaded stimuli.

    PubMed

    Chacón, José; Castellanos, Miguel Ángel; Serrano-Pedraza, Ignacio

    2015-01-01

    Previous research has shown a visual asymmetry in shaded stimuli where the perceived contrast depended on the polarity of their dark and light areas (Chacón, 2004). In particular, circles filled out with a top-dark luminance ramp were perceived with higher contrast than top-light ones although both types of stimuli had the same physical contrast. Here, using shaded stimuli, we conducted four experiments in order to find out if the perceived contrast depends on: (a) the contrast level, (b) the type of shading (continuous vs. discrete) and its degree of perceived three-dimensionality, (c) the orientation of the shading, and (d) the sign of the perceived contrast alterations. In all experiments the observers' tasks were to equate the perceived contrast of two sets of elements (usually shaded with opposite luminance polarity), in order to determine the subjective equality point. Results showed that (a) there is a strong difference in perceived contrast between circles filled out with luminance ramp top-dark and top-light that is similar for different contrast levels; (b) we also found asymmetries in contrast perception with different shaded stimuli, and this asymmetry was not related with the perceived three-dimensionality but with the type of shading, being greater for continuous-shading stimuli; (c) differences in perceived contrast varied with stimulus orientation, showing the maximum difference on vertical axis with a left bias consistent with the bias found in previous studies that used visual-search tasks; and (d) asymmetries are consistent with an attenuation in perceived contrast that is selective for top-light vertically-shaded stimuli.

  33. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  34. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  35. Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Waytowich, Nicholas R.; Krusienski, Dean J.

    2015-06-01

    Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating
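
    For orientation, c-VEP decoding generally works by driving each target with a circular shift of one pseudorandom code and assigning the attended target to the shift whose template best correlates with the recorded response. The sketch below illustrates that idea with simulated data; the code length, shifts, and noise level are assumptions, and this is not the implementation of the paradigm described above.

      # Template-correlation decoding of a code-modulated VEP (illustrative only).
      import numpy as np

      rng = np.random.default_rng(3)
      code = rng.integers(0, 2, 63) * 2 - 1            # stand-in for a 63-bit m-sequence (+/-1)
      n_targets = 4
      shifts = [k * (len(code) // n_targets) for k in range(n_targets)]
      templates = np.stack([np.roll(code, s) for s in shifts])

      attended = 2                                     # simulate fixation of target 2
      response = templates[attended] + rng.normal(0, 1.0, len(code))   # noisy evoked response

      corrs = [np.corrcoef(response, tmpl)[0, 1] for tmpl in templates]
      decoded = int(np.argmax(corrs))
      print("decoded target:", decoded, "correlations:", np.round(corrs, 2))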

  36. The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research.

    PubMed

    Brodeur, Mathieu B; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin

    2010-05-24

    There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli.

  37. The Bank of Standardized Stimuli (BOSS), a New Set of 480 Normative Photos of Objects to Be Used as Visual Stimuli in Cognitive Research

    PubMed Central

    Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin

    2010-01-01

    There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245

  38. Locomotion Enhances Neural Encoding of Visual Stimuli in Mouse V1

    PubMed Central

    2017-01-01

    Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are thought to be initiated by projections from the mesencephalic locomotor region, the latter through a disinhibitory circuit in V1. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a threefold to fivefold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving. SIGNIFICANCE STATEMENT This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of 3 the time needed for information to accumulate in the visual cortex that allows the distinction of different visual stimuli. Second, we show that the effect of locomotion is to increase information in cells of all
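
    The "noise correlations" referred to above are the trial-to-trial correlations between pairs of neurons after the stimulus-driven (signal) component has been removed. The sketch below shows the usual computation on synthetic responses; all tuning curves and noise parameters are assumptions.

      # Noise correlation between two neurons across repeated stimulus presentations (illustrative).
      import numpy as np

      rng = np.random.default_rng(4)
      n_stimuli, n_repeats = 8, 50
      angles = np.linspace(0, 2 * np.pi, n_stimuli, endpoint=False)
      tuning_a = 5 + 3 * np.sin(angles)
      tuning_b = 5 + 3 * np.cos(angles)

      shared = rng.normal(0, 1, (n_stimuli, n_repeats))            # shared trial-to-trial fluctuation
      resp_a = tuning_a[:, None] + shared + rng.normal(0, 1, shared.shape)
      resp_b = tuning_b[:, None] + shared + rng.normal(0, 1, shared.shape)

      # z-score within each stimulus so only trial-to-trial (noise) variability remains
      za = (resp_a - resp_a.mean(1, keepdims=True)) / resp_a.std(1, keepdims=True)
      zb = (resp_b - resp_b.mean(1, keepdims=True)) / resp_b.std(1, keepdims=True)
      noise_corr = np.corrcoef(za.ravel(), zb.ravel())[0, 1]
      print(f"noise correlation = {noise_corr:.2f}")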

  39. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
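
    The class of model described above, a point-process GLM whose firing probability combines a stimulus filter with a spike-history filter, can be sketched as a forward simulation. The filters, baseline rate, and bin size below are arbitrary assumptions, and fitting such a model to real data would use maximum-likelihood estimation rather than the hand-set parameters shown here.

      # Forward simulation of a GLM with stimulus and spike-history filters (illustrative).
      import numpy as np

      rng = np.random.default_rng(5)
      dt = 0.001                                    # 1 ms bins
      T = 5000
      stimulus = rng.normal(0, 1, T)

      k = 0.8 * np.exp(-np.arange(20) / 5.0)        # stimulus filter, most-recent sample first
      h = -4.0 * np.exp(-np.arange(10) / 2.0)       # spike-history filter (refractoriness)
      b = np.log(20.0)                              # baseline log-rate (~20 spikes/s)

      stim_pad = np.r_[np.zeros(len(k) - 1), stimulus]
      spk_pad = np.zeros(T + len(h))

      for t in range(T):
          stim_drive = k @ stim_pad[t:t + len(k)][::-1]
          hist_drive = h @ spk_pad[t:t + len(h)][::-1]
          rate = np.exp(b + stim_drive + hist_drive) * dt    # conditional intensity per bin
          spk_pad[t + len(h)] = rng.random() < rate          # Bernoulli approximation of Poisson

      spikes = spk_pad[len(h):]
      print("simulated spike count:", int(spikes.sum()))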

  20. Brain activation by visual erotic stimuli in healthy middle aged males.

    PubMed

    Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H

    2006-01-01

    The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle-aged males. Ten heterosexual, right-handed males with normal sexual function were enrolled (mean age 52 years, range 46-55). All potential subjects were screened in a 1 h interview and asked to complete questionnaires including the Brief Male Sexual Function Inventory; subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional magnetic resonance imaging (fMRI) in the volunteers while a film alternating between erotic and nonerotic segments was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, the hypothalamus and thalamus were not activated. We suggest that this nonactivation of the hypothalamus and thalamus in middle-aged males may be responsible for their lesser physiological arousal in response to erotic visual stimuli.

  1. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have treated processing as passive and as a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  2. Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study

    NASA Astrophysics Data System (ADS)

    Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.

    2015-08-01

    Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated subjective assessment, event-related potentials (ERPs) and the electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload, as assessed by the subjective NASA task load index and corroborated by the observed P300 amplitude attenuation. Furthermore, single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that frontal regions carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single-trial EEG/ERP detection method.

  3. Perceptual congruency of audio-visual speech affects ventriloquism with bilateral visual stimuli.

    PubMed

    Kanaya, Shoko; Yokosawa, Kazuhiko

    2011-02-01

    Many studies on multisensory processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. However, these results cannot necessarily be applied to explain our perceptual behavior in natural scenes where various signals exist within one sensory modality. We investigated the role of audio-visual syllable congruency on participants' auditory localization bias or the ventriloquism effect using spoken utterances and two videos of a talking face. Salience of facial movements was also manipulated. Results indicated that more salient visual utterances attracted participants' auditory localization. Congruent pairing of audio-visual utterances elicited greater localization bias than incongruent pairing, while previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference on auditory localization. Multisensory performance appears more flexible and adaptive in this complex environment than in previous studies.

  4. Extracting alpha band modulation during visual spatial attention without flickering stimuli using common spatial pattern.

    PubMed

    Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka

    2008-01-01

    This paper focuses on alpha band modulation during visual spatial attention in the absence of flickering visual stimuli. Visual spatial attention is expected to provide a new channel for non-invasive, independent brain computer interfaces (BCIs), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice, so we investigated whether visual spatial attention could be detected without such stimuli. In addition, the common spatial pattern (CSP) method was applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, average classification accuracy in discriminating the left and right attention classes was 66.44% (range 55.42-72.27%); with CSP, it was 75.39% (range 63.75-86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could serve as a new channel for independent BCI alongside motor imagery.
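    For readers unfamiliar with CSP, the following Python sketch shows the standard two-class formulation (power-normalized class covariance matrices, a generalized eigendecomposition, and log-variance features). It is a textbook illustration with made-up data, not the processing pipeline used in this study.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_filters=6):
        """trials_*: (n_trials, n_channels, n_samples). Returns (n_filters, n_channels)."""
        def mean_cov(trials):
            covs = [x @ x.T / np.trace(x @ x.T) for x in trials]   # power-normalized covariances
            return np.mean(covs, axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        vals, vecs = eigh(ca, ca + cb)               # generalized eigenproblem Ca w = lambda (Ca + Cb) w
        order = np.argsort(vals)                     # extreme eigenvalues are most discriminative
        pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
        return vecs[:, pick].T

    def log_variance_features(trials, w):
        """Project each trial through the CSP filters and take normalized log-variance."""
        feats = []
        for x in trials:
            v = (w @ x).var(axis=1)
            feats.append(np.log(v / v.sum()))
        return np.array(feats)

    # Toy usage: random data standing in for 30-channel, alpha-band-filtered EEG epochs.
    rng = np.random.default_rng(2)
    left = rng.standard_normal((40, 30, 512))        # 40 "attend left" trials
    right = rng.standard_normal((40, 30, 512))       # 40 "attend right" trials
    w = csp_filters(left, right)
    features = log_variance_features(np.concatenate([left, right]), w)
    print(features.shape)                            # (80, 6) -> feed to an LDA or SVM classifier
    ```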

  5. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    PubMed

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed out as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% higher than the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300 evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on this sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.

    PubMed

    Kokubu, Masahiro; Ando, Soichi; Oda, Shingo

    2018-01-18

    The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at a far distance would contribute to faster reactions and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
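    Amplitude and inter-site synchrony of band-limited oscillations of this kind are commonly quantified with a band-pass filter followed by the Hilbert transform; the sketch below illustrates that generic approach with invented signals and parameters, rather than the specific analysis used in this study.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_amplitude_and_plv(x, y, fs, band=(20.0, 35.0)):
        """Mean band-limited amplitude and phase-locking value between two sensors."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        xa, ya = hilbert(filtfilt(b, a, x)), hilbert(filtfilt(b, a, y))
        amplitude = (np.abs(xa).mean() + np.abs(ya).mean()) / 2
        plv = np.abs(np.mean(np.exp(1j * (np.angle(xa) - np.angle(ya)))))
        return amplitude, plv

    # Toy usage: two noisy sensors sharing a 27 Hz component.
    fs = 500
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(3)
    shared = np.sin(2 * np.pi * 27 * t)
    x = shared + 0.5 * rng.standard_normal(t.size)
    y = shared + 0.5 * rng.standard_normal(t.size)
    print(band_amplitude_and_plv(x, y, fs))
    ```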

  9. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

    Subject terms: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human… Abstract excerpts: "…contrasts no shadows, drop shadows and drop lines." "…altitude perception in airspace management and airspace route planning—simulated reality visualizations that employ altitude and heading as well as…" "…cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract…"

  10. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

    From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading
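    Sensitivity d' of the kind reported here is conventionally computed from hit and false-alarm rates; the short sketch below shows the standard yes/no formula with a log-linear correction, using invented counts rather than data from this study.

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Yes/no d' with a log-linear correction so rates of 0 or 1 stay finite."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Example: 42 hits and 8 misses on "different" pairs, 12 false alarms and
    # 38 correct rejections on "same" pairs.
    print(round(d_prime(42, 8, 12, 38), 2))      # roughly 1.7
    ```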

  11. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    PubMed

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  12. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.

    PubMed

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-04-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function, as manifested in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged from 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs and the participants were provided with snacks so that they would be comfortable and relaxed. After the presentation, each group was exposed to one of two forms of visual stimuli for 27 min. The two formats contained either visually affective content (sexually suggestive, violent or frightening material) or neutral content (a nature documentary). The group which was exposed to the emotive visual stimuli remembered significantly fewer words than the group which watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.

  13. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    PubMed

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and querying of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements coming from the represented clinical domain. The visual representation of temporal information has been considered for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated with clinical users for the specification of temporal clinical abstractions. Visual approaches to Boolean queries have been considered in recent years, confirming that visual support for the specification of complex Boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on

  14. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

  15. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  16. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  17. Testing memory for unseen visual stimuli in patients with extinction and spatial neglect.

    PubMed

    Vuilleumier, Patrik; Schwartz, Sophie; Clarke, Karen; Husain, Masud; Driver, Jon

    2002-08-15

    Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.

  18. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  19. Conditional Relations with Compound Abstract Stimuli Using a Go/No-Go Procedure

    ERIC Educational Resources Information Center

    Debert, Paula; Matos, Maria Amelia; McIlvane, William

    2007-01-01

    The aim of this study was to evaluate whether emergent conditional relations could be established with a go/no-go procedure using compound abstract stimuli. The procedure was conducted with 6 adult humans. During training, responses emitted in the presence of certain stimulus compounds (A1B1, A2B2, A3B3, B1C1, B2C2, and B3C3) were followed by…

  20. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    PubMed

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing if flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering, based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare-event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence, and then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
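    The abstract does not give the exact formulas, but the two steps can be sketched under simple assumptions: a desired rare-event proportion (here 10%) determines how many distracters to add, and one possible reordering rule spreads the targets evenly through the sequence. All names and parameter values below are illustrative, not the authors' values.

    ```python
    def n_distracters_needed(n_images, n_targets, rare_ratio=0.10):
        """How many distracter images to add so targets make up at most rare_ratio."""
        needed_total = int(round(n_targets / rare_ratio))
        return max(0, needed_total - n_images)

    def spread_targets(targets, distracters):
        """Interleave targets evenly among distracters (one possible reordering rule)."""
        n_slots = len(targets) + len(distracters)
        step = n_slots / max(len(targets), 1)
        target_slots = {int(i * step) for i in range(len(targets))}
        t_iter, d_iter = iter(targets), iter(distracters)
        return [next(t_iter) if pos in target_slots else next(d_iter)
                for pos in range(n_slots)]

    # Toy usage: 80 candidate images of which 15 are targets.
    targets = [f"target_{i:02d}" for i in range(15)]
    others = [f"nontarget_{i:02d}" for i in range(65)]
    extra = n_distracters_needed(len(targets) + len(others), len(targets))
    others += [f"distracter_{i:02d}" for i in range(extra)]
    rsvp_sequence = spread_targets(targets, others)
    print(extra, len(rsvp_sequence), rsvp_sequence[:4])
    ```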

  2. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18 F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  3. Shape and color conjunction stimuli are represented as bound objects in visual working memory.

    PubMed

    Luria, Roy; Vogel, Edward K

    2011-05-01

    The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that they may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  5. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  6. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    PubMed

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  7. Anchoring in Numeric Judgments of Visual Stimuli

    PubMed Central

    Langeborg, Linda; Eriksson, Mårten

    2016-01-01

    This article investigates effects of anchoring in age estimation and estimation of quantities, two tasks which to different extents are based on visual stimuli. The results are compared to anchoring in answers to classic general knowledge questions that rely on semantic knowledge. Cognitive load was manipulated to explore possible differences between domains. Effects of source credibility, manipulated by differing instructions regarding the selection of anchor values (no information regarding anchor selection, information that the anchors are randomly generated or information that the anchors are answers from an expert) on anchoring were also investigated. Effects of anchoring were large for all types of judgments but were not affected by cognitive load or by source credibility in either one of the researched domains. A main effect of cognitive load on quantity estimations and main effects of source credibility in the two visually based domains indicate that the manipulations were efficient. Implications for theoretical explanations of anchoring are discussed. In particular, because anchoring did not interact with cognitive load, the results imply that the process behind anchoring in visual tasks is predominantly automatic and unconscious. PMID:26941684

  8. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  9. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    ERIC Educational Resources Information Center

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992, de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  10. Physiological and behavioral reactions elicited by simulated and real-life visual and acoustic helicopter stimuli in dairy goats

    PubMed Central

    2011-01-01

    Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters, and to combinations of these stimuli, under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results The goats responded alertly to the visual and/or acoustic stimuli presented in their room: they raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable increase in the goats' average movement velocity within their enclosure, nor in the duration of movement, during stimulus presentation. There was also no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological or behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity in goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239

  11. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    PubMed

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

    Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D), although main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenario valence) and their interaction with three-dimensionality.

  12. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  13. Visual laterality in dolphins: importance of the familiarity of stimuli

    PubMed Central

    2012-01-01

    Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to develop an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are migrant species they are confronted to a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, eyes are used indifferently to observe familiar objects with intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins which might be crucial for their wild life given their fission-fusion social system

  14. Visual laterality in dolphins: importance of the familiarity of stimuli.

    PubMed

    Blois-Heulin, Catherine; Crével, Mélodie; Böye, Martin; Lemasson, Alban

    2012-01-12

    Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to develop an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are migrant species they are confronted to a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, eyes are used indifferently to observe familiar objects with intermediate valence. Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins which might be crucial for their wild life given their fission-fusion social system and migratory behaviour.

  15. Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture

    ERIC Educational Resources Information Center

    West, Greg L.; Anderson, Adam A. K.; Pratt, Jay

    2009-01-01

    Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…

  16. Retinal image quality and visual stimuli processing by simulation of partial eye cataract

    NASA Astrophysics Data System (ADS)

    Ozolinsh, Maris; Danilenko, Olga; Zavjalova, Varvara

    2016-10-01

    Visual stimuli were presented on a 4.3'' mobile phone screen inside a "Virtual Reality" adapter that allowed separation of the left-eye and right-eye visual fields. The contrast of the retinal image could thus be controlled by the image on the phone screen and, in parallel, at appropriate geometry, by the AC voltage applied to a scattering PDLC cell inside the adapter. This separation of the optical pathways makes it possible to show the two eyes spatially different images that, after binocular fusion, acquire their characteristic appearance. As visual stimuli we used grey and colored (one of the two opponent components of vision, red-green, in L*a*b* color space) spatially periodic patterns for the left and right eyes, with spatial content that, by addition or subtraction, yielded clockwise- or counterclockwise-slanted Gabor gratings. We performed computer modeling with numerical addition or subtraction of the signals, analogous to processing in the brain, by decomposing the stimulus input into luminance and color-opponency components. The modeling revealed that the psychophysical equilibrium point between clockwise and counterclockwise perception of the summed image depends on the contrast and color saturation of one eye's image and on the strength of retinal aftereffects. A psychophysical equilibrium point in perception of the summed image exists only after prior adaptation to a slanted periodic grating, and only at the appropriate slant orientation of the adaptation grating and/or at the appropriate spatial phase of the grating pattern relative to the grating nodes. Perception experiments in which one eye's image was deteriorated by a simulated cataract confirmed that this psychophysical equilibrium point shifts with the degree of artificial cataract. We also analyzed the emission spectra of the mobile-device stimuli, paying attention to spectral regions near the absorption maxima of the macular pigments and to the blue region where intense irradiation can cause abnormalities in periodic melatonin

  17. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    PubMed

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Enhanced Visual Cortical Activation for Emotional Stimuli is Preserved in Patients with Unilateral Amygdala Resection

    PubMed Central

    Edmiston, E. Kale; McHugo, Maureen; Dukic, Mildred S.; Smith, Stephen D.; Abou-Khalil, Bassel; Eggers, Erica

    2013-01-01

    Emotionally arousing pictures induce increased activation of visual pathways relative to emotionally neutral images. A predominant model for the preferential processing and attention to emotional stimuli posits that the amygdala modulates sensory pathways through its projections to visual cortices. However, recent behavioral studies have found intact perceptual facilitation of emotional stimuli in individuals with amygdala damage. To determine the importance of the amygdala to modulations in visual processing, we used functional magnetic resonance imaging to examine visual cortical blood oxygenation level-dependent (BOLD) signal in response to emotionally salient and neutral images in a sample of human patients with unilateral medial temporal lobe resection that included the amygdala. Adults with right (n = 13) or left (n = 5) medial temporal lobe resections were compared with demographically matched healthy control participants (n = 16). In the control participants, both aversive and erotic images produced robust BOLD signal increases in bilateral primary and secondary visual cortices relative to neutral images. Similarly, all patients with amygdala resections showed enhanced visual cortical activations to erotic images both ipsilateral and contralateral to the lesion site. All but one of the amygdala resection patients showed similar enhancements to aversive stimuli and there were no significant group differences in visual cortex BOLD responses in patients compared with controls for either aversive or erotic images. Our results indicate that neither the right nor left amygdala is necessary for the heightened visual cortex BOLD responses observed during emotional stimulus presentation. These data challenge an amygdalo-centric model of emotional modulation and suggest that non-amygdalar processes contribute to the emotional modulation of sensory pathways. PMID:23825407

  19. Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish

    PubMed Central

    Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian

    2011-01-01

    Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793
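
    The spot sizes above are specified in degrees of visual angle; reproducing such "virtual reality" stimuli requires converting angular size into on-screen size, which follows from simple trigonometry. A minimal sketch of that conversion (Python, with hypothetical viewing distance and screen resolution; the original assay's display geometry is not given here):

    ```python
    import math

    def visual_angle_to_pixels(angle_deg, viewing_distance_cm, pixels_per_cm):
        """Convert an angular stimulus size to an on-screen size in pixels.

        Uses size = 2 * d * tan(theta / 2), where d is the viewing distance
        and theta is the visual angle subtended by the stimulus.
        """
        size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2.0)
        return size_cm * pixels_per_cm

    # Hypothetical geometry: screen 5 cm from the animal, 40 pixels per cm.
    print(visual_angle_to_pixels(1.0, 5.0, 40))   # ~3.5 px for a 1 deg "prey-like" spot
    print(visual_angle_to_pixels(10.0, 5.0, 40))  # ~35 px for a 10 deg "aversive" spot
    ```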

  20. Death anxiety and visual oculomotor processing of arousing stimuli in a free view setting.

    PubMed

    Wendelberg, Linda; Volden, Frode; Yildirim-Yayilgan, Sule

    2017-04-01

    The main goal of this study was to determine how death anxiety (DA) affects visual processing when confronted with arousing stimuli. A total of 26 males and females were primed with either DA or a neutral primer and were given a free view/free choice task where eye movement was measured using an eye tracker. The goal was to identify measurable/observable indicators of whether the subjects were under the influence of DA during the free view. We conducted an eye tracking study because this is an area where we believe it is possible to find observable indicators. Ultimately, we observed some changes in the visual behavior, such as a prolonged average latency, altered sensitivity to the repetition of stimuli, longer fixations, less time in saccadic activity, and fewer classifications related to focal and ambient processing, which appear to occur under the influence of DA when the subjects are confronted with arousing stimuli. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  1. Toward a hybrid brain-computer interface based on repetitive visual stimuli with missing events.

    PubMed

    Wu, Yingying; Li, Man; Wang, Jing

    2016-07-26

    Steady-state visually evoked potentials (SSVEPs) can be elicited by repetitive stimuli and extracted in the frequency domain with satisfactory performance. However, the temporal information of such stimuli is often ignored. In this study, we utilized repetitive visual stimuli with missing events to present a novel hybrid BCI paradigm based on SSVEP and the omitted stimulus potential (OSP). Four discs flickering from black to white with missing flickers served as visual stimulators to simultaneously elicit subjects' SSVEPs and OSPs. Key parameters of the new paradigm, including flicker frequency, optimal electrodes, missing-flicker duration, and intervals between missing events, were qualitatively discussed with offline data. Two omitted-flicker patterns, missing black disc and missing white disc, were proposed and compared. Averaging times were optimized with the Information Transfer Rate (ITR) in online experiments, where SSVEPs and OSPs were identified using Canonical Correlation Analysis in the frequency domain and Support Vector Machine (SVM)-Bayes fusion in the time domain, respectively. The online accuracy and ITR (mean ± standard deviation) over nine healthy subjects were 79.29 ± 18.14 % and 19.45 ± 11.99 bits/min with the missing black disc pattern, and 86.82 ± 12.91 % and 24.06 ± 10.95 bits/min with the missing white disc pattern, respectively. The proposed BCI paradigm demonstrated, for the first time, that SSVEPs and OSPs can be simultaneously elicited by a single visual stimulus pattern and recognized in real time with satisfactory performance. Besides frequency features such as the SSVEP elicited by repetitive stimuli, we found a new time-domain feature (the OSP) with which to design a novel hybrid BCI paradigm by adding missing events to repetitive stimuli.
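
    For readers unfamiliar with the frequency-domain identification step mentioned above, SSVEP detection by Canonical Correlation Analysis (CCA) correlates the multichannel EEG with sine/cosine references at each candidate flicker frequency and picks the best-matching one. The sketch below is only an illustration of that generic technique under assumed parameters (sampling rate, frequencies, number of harmonics), not the authors' pipeline; the OSP is detected separately in the time domain and is not shown.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def ssvep_cca_score(eeg, fs, freq, n_harmonics=2):
        """Canonical correlation between EEG (samples x channels) and
        sine/cosine references at a candidate flicker frequency."""
        t = np.arange(eeg.shape[0]) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * freq * t))
            refs.append(np.cos(2 * np.pi * h * freq * t))
        Y = np.column_stack(refs)
        x_scores, y_scores = CCA(n_components=1).fit_transform(eeg, Y)
        return np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]

    def classify_flicker(eeg, fs, candidate_freqs):
        """Return the candidate frequency whose references correlate best with the EEG."""
        scores = [ssvep_cca_score(eeg, fs, f) for f in candidate_freqs]
        return candidate_freqs[int(np.argmax(scores))]

    # e.g., classify_flicker(epoch, fs=250, candidate_freqs=[8.0, 10.0, 12.0, 15.0])
    ```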

  2. Unsupervised visual discrimination learning of complex stimuli: Accuracy, bias and generalization.

    PubMed

    Montefusco-Siegmund, Rodrigo; Toro, Mauricio; Maldonado, Pedro E; Aylwin, María de la L

    2018-07-01

    Through same-different judgments, we can discriminate an immense variety of stimuli; consequently, these judgments are critical in our everyday interaction with the environment. Their quality depends on familiarity with the stimuli. One way to improve discrimination is through learning, but to this day we lack direct evidence of how learning shapes same-different judgments with complex stimuli. We studied unsupervised visual discrimination learning in 42 participants as they performed same-different judgments with two types of unfamiliar complex stimuli, in the absence of labeling or individuation. Across nine daily training sessions with equiprobable same and different stimulus pairs, participants increased both sensitivity and criterion by reducing errors with both same and different pairs. With practice, performance was superior for different pairs and there was a bias toward "different" responses. To evaluate the process underlying this bias, we manipulated the proportion of same and different pairs, which produced an additional proportion-induced bias, suggesting that the bias observed with equal proportions was a stimulus-processing bias. Overall, these results suggest that unsupervised discrimination learning occurs through changes in stimulus processing that increase the sensory evidence and/or the precision of working memory. Finally, the acquired discrimination ability transferred fully to novel exemplars of the practiced stimulus category, consistent with the acquisition of a category-specific perceptual expertise. Copyright © 2018 Elsevier Ltd. All rights reserved.
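
    The "sensitivity" and "criterion" reported above are standard signal detection theory measures. As a worked illustration only (the authors' analysis of same-different data may use a more specific model), the common yes/no estimators are d' = z(H) - z(F) and c = -(z(H) + z(F))/2, treating "different" pairs as signal trials:

    ```python
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Sensitivity (d') and criterion (c) from response counts.

        A "different" pair answered "different" is a hit; a "same" pair
        answered "different" is a false alarm. The +0.5/+1 log-linear
        correction avoids infinite z-scores at rates of 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
        return d_prime, criterion

    # Hypothetical session: 80/100 "different" pairs and 65/100 "same" pairs correct.
    print(sdt_measures(80, 20, 35, 65))  # d' ~ 1.2, c ~ -0.22 (a "different" bias)
    ```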

  3. Visual stimuli that elicit appetitive behaviors in three morphologically distinct species of praying mantis.

    PubMed

    Prete, Frederick R; Komito, Justin L; Dominguez, Salina; Svenson, Gavin; López, LeoLin Y; Guillen, Alex; Bogdanivich, Nicole

    2011-09-01

    We assessed the differences in appetitive responses to visual stimuli by three species of praying mantis (Insecta: Mantodea), Tenodera aridifolia sinensis, Mantis religiosa, and Cilnia humeralis. Tethered, adult females watched computer generated stimuli (erratically moving disks or linearly moving rectangles) that varied along predetermined parameters. Three responses were scored: tracking, approaching, and striking. Threshold stimulus size (diameter) for tracking and striking at disks ranged from 3.5 deg (C. humeralis) to 7.8 deg (M. religiosa), and from 3.3 deg (C. humeralis) to 11.7 deg (M. religiosa), respectively. Unlike the other species which struck at disks as large as 44 deg, T. a. sinensis displayed a preference for 14 deg disks. Disks moving at 143 deg/s were preferred by all species. M. religiosa exhibited the most approaching behavior, and with T. a. sinensis distinguished between rectangular stimuli moving parallel versus perpendicular to their long axes. C. humeralis did not make this distinction. Stimulus sizes that elicited the target behaviors were not related to mantis size. However, differences in compound eye morphology may be related to species differences: C. humeralis' eyes are farthest apart, and it has an apparently narrower binocular visual field which may affect retinal inputs to movement-sensitive visual interneurons.

  4. Role of Visualization in Mathematical Abstraction: The Case of Congruence Concept

    ERIC Educational Resources Information Center

    Yilmaz, Rezan; Argun, Ziya

    2018-01-01

    Mathematical abstraction is an important process in mathematical thinking. Also, visualization is a strong tool for searching mathematical problems, giving meaning to mathematical concepts and the relationships between them. In this paper, we aim to investigate the role of visualizations in mathematical abstraction through a case study on five…

  5. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    PubMed

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed to determine the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization level (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level-dependent (BOLD) contrast. Metabolic responses were assessed using functional 1H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  6. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

    Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, and thereby facilitate the processing of visual stimuli occurring on the same side of space as the limb to which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand, one on each side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, on a single hand, or bilaterally, on both hands simultaneously. Results show that, compared to the bilateral condition, participants' judgments were biased in favor of the visual stimuli that occurred on the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but, importantly, the bias increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
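
    In a temporal order judgment task of this kind, the attentional bias is usually summarized by the point of subjective simultaneity (PSS) of a psychometric function fitted to the responses as a function of stimulus onset asynchrony (SOA). The following is a minimal illustration with made-up data and a cumulative-Gaussian fit; the study itself used the adaptive PSI procedure to place trials, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(soa, pss, jnd):
        """P(report the stimulus near the stimulated hand as first) vs. SOA."""
        return norm.cdf(soa, loc=pss, scale=jnd)

    # Hypothetical data: SOA in ms (positive = stimulus near the stimulated hand leads)
    # and the observed proportion of "that stimulus first" responses.
    soa = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
    p_first = np.array([0.08, 0.20, 0.42, 0.63, 0.82, 0.93, 0.97])

    (pss, jnd), _ = curve_fit(psychometric, soa, p_first, p0=(0.0, 40.0))
    # A negative PSS (about -13 ms here) means that stimulus is still seen first
    # half of the time even when it lags, i.e., prior entry for the stimulated side.
    print(pss, jnd)
    ```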

  7. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
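
    For reference, the minimum norm approach named above estimates distributed source amplitudes through a regularized linear inverse of the lead-field (gain) matrix. In one common formulation (details such as the covariance priors vary between implementations),

    \[ \hat{\mathbf{s}} = R\,G^{\mathsf T}\left(G R G^{\mathsf T} + \lambda^{2} C\right)^{-1}\mathbf{m}, \]

    where \(\mathbf{m}\) is the measured MEG field vector, \(G\) the lead-field matrix, \(R\) the source covariance prior, \(C\) the noise covariance, and \(\lambda\) a regularization parameter. A beamformer instead scans one source location at a time with a spatial filter \(w = (G_r^{\mathsf T} C_d^{-1} G_r)^{-1} G_r^{\mathsf T} C_d^{-1}\) built from the data covariance \(C_d\), which is one reason the two approaches can localize the same retinotopic responses differently.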

  8. Sexual preference for child and aggressive stimuli: comparison of rapists and child molesters using auditory and visual stimuli.

    PubMed

    Miner, M H; West, M A; Day, D M

    1995-06-01

    154 Ss were tested using penile plethysmography as part of intake into a voluntary inpatient sex offender treatment program. The testing protocol included slide stimuli of nude males and females in four age categories ranging from age 1 to adult; audiotaped descriptions of sexual activity with children of both genders which included fondling, sexual contact with no resistance, coercive sexual contact, sexual assault, nonsexual assault, and consensual sexual contact with an adult; videotaped depictions of rape of an adult woman, nonsexual assault of an adult woman and consensual sexual involvement with an adult woman, and audiotaped descriptions that paralleled the videotapes. The results indicated that child molesters (male victim) show a decidedly more offense related arousal profile than either child molesters (female victim) or rapists, and that the profiles of child molesters (female victim) and rapists are remarkably similar, although statistically significantly different from each other. Rapists respond significantly more to rape and nonsexual assault than either of the two child molester groups, with child molesters with female victims responding more than those with male victims. In all three groups, the highest level of noncoercive adult responding was to women, with differences among offense groups present for visual stimuli, but not in response to auditory stimuli. Overall, the patterns of results are similar whether they are based on composites across stimulus modality or on the individual stimuli.

  9. Genetically Identified Suppressed-by-Contrast Retinal Ganglion Cells Reliably Signal Self-Generated Visual Stimuli

    PubMed Central

    Tien, Nai-Wen; Pearson, James T.; Heller, Charles R.; Demas, Jay

    2015-01-01

    Spike trains of retinal ganglion cells (RGCs) are the sole source of visual information to the brain; and understanding how the ∼20 RGC types in mammalian retinae respond to diverse visual features and events is fundamental to understanding vision. Suppressed-by-contrast (SbC) RGCs stand apart from all other RGC types in that they reduce rather than increase firing rates in response to light increments (ON) and decrements (OFF). Here, we genetically identify and morphologically characterize SbC-RGCs in mice, and target them for patch-clamp recordings under two-photon guidance. We find that strong ON inhibition (glycine > GABA) outweighs weak ON excitation, and that inhibition (glycine > GABA) coincides with decreases in excitation at light OFF. These input patterns explain the suppressive spike responses of SbC-RGCs, which are observed in dim and bright light conditions. Inhibition to SbC-RGC is driven by rectified receptive field subunits, leading us to hypothesize that SbC-RGCs could signal pattern-independent changes in the retinal image. Indeed, we find that shifts of random textures matching saccade-like eye movements in mice elicit robust inhibitory inputs and suppress spiking of SbC-RGCs over a wide range of texture contrasts and spatial frequencies. Similarly, stimuli based on kinematic analyses of mouse blinking consistently suppress SbC-RGC spiking. Receiver operating characteristics show that SbC-RGCs are reliable indicators of self-generated visual stimuli that may contribute to central processing of blinks and saccades. SIGNIFICANCE STATEMENT This study genetically identifies and morphologically characterizes suppressed-by-contrast retinal ganglion cells (SbC-RGCs) in mice. Targeted patch-clamp recordings from SbC-RGCs under two-photon guidance elucidate the synaptic mechanisms mediating spike suppression to contrast steps, and reveal that SbC-RGCs respond reliably to stimuli mimicking saccade-like eye movements and blinks. The similarity of

  10. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

    DOE PAGES

    Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...

    2017-08-29

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
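
    The comparison described above, between model saliency maps and human eye-tracking data, is commonly quantified with an AUC-style metric: fixated pixels are treated as positives and the saliency value at each pixel as the score. A compact, illustrative sketch of that comparison (not the DVS paper's exact evaluation):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def saliency_auc(saliency_map, fixation_map):
        """AUC of a saliency map at predicting fixated locations.

        saliency_map: 2-D array of model scores.
        fixation_map: 2-D array, nonzero where a human fixation landed.
        """
        scores = saliency_map.ravel()
        labels = (fixation_map.ravel() > 0).astype(int)
        return roc_auc_score(labels, scores)

    # Toy example: a map that is salient exactly where fixations fall scores 1.0.
    sal = np.array([[0.9, 0.1], [0.2, 0.8]])
    fix = np.array([[1, 0], [0, 1]])
    print(saliency_auc(sal, fix))
    ```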

  11. Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.

    Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.

  12. Taste in Art-Exposure to Histological Stains Shapes Abstract Art Preferences.

    PubMed

    Böthig, Antonia M; Hayn-Leichsenring, Gregor U

    2017-01-01

    Exposure to art increases the appreciation of artworks. Here, we showed that this effect is domain-independent. After viewing images of histological stains in a lecture, ratings increased for restricted subsets of abstract art images. In contrast, a lecture on art history generally enhanced ratings for all art images presented, while a lecture on town history without any visual stimuli did not increase the ratings. We therefore found a domain-independent exposure effect of images of histological stains on ratings of particular abstract paintings. This finding suggests that 'taste' for abstract art is altered by visual impressions presented outside of an artistic context.

  13. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity pertaining to time perception have shown that fast training of temporal discrimination in one modality, for example the auditory modality, can improve temporal discrimination performance in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, each composed of three sessions: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either "element motion" or "group motion." For the training session in Experiment 1, we constructed two types of temporal structure, in which two consecutively presented sound beeps were dominantly (80%) either flanked by one leading and one lagging visual Ternus frame (VAAV) or separated by two inserted visual Ternus frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory-visual pair with temporal configurations similar to Experiment 1, and asked participants to perform an audio-visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of visual motion perception, according to the protocols of the specific bindings. PMID:27065910

  15. The Influences of Static and Interactive Dynamic Facial Stimuli on Visual Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…

  16. Parietal Activation During Retrieval of Abstract and Concrete Auditory Information

    PubMed Central

    Klostermann, Ellen C.; Kane, Ari J.M.; Shimamura, Arthur P.

    2008-01-01

    Successful memory retrieval has been associated with a neural circuit that involves prefrontal, precuneus, and posterior parietal regions. Specifically, these regions are active during recognition memory tests when items correctly identified as “old” are compared with items correctly identified as “new.” Yet, as nearly all previous fMRI studies have used visual stimuli, it is unclear whether activations in posterior regions are specifically associated with memory retrieval or if they reflect visuospatial processing. We focus on the status of parietal activations during recognition performance by testing memory for abstract and concrete nouns presented in the auditory modality with eyes closed. Successful retrieval of both concrete and abstract words was associated with increased activation in left inferior parietal regions (BA 40), similar to those observed with visual stimuli. These results demonstrate that activations in the posterior parietal cortex during retrieval cannot be attributed to bottom-up visuospatial processes but instead have a more direct relationship to memory retrieval processes. PMID:18243736

  17. Transformation of the Discriminative and Eliciting Functions of Generalized Relational Stimuli

    ERIC Educational Resources Information Center

    Dougher, Michael J.; Hamilton, Derek; Fink, Brandi; Harrington, Jennifer

    2007-01-01

    In three experiments, match-to-sample procedures were used with undergraduates to establish arbitrary relational functions for three abstract visual stimuli. In the presence of samples A, B, and C, participants were trained to select the smallest, middle, and largest member, respectively, of a series of three-comparison arrays. In Experiment 1,…

  18. Measuring Software Timing Errors in the Presentation of Visual Stimuli in Cognitive Neuroscience Experiments

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego; Matute, Helena

    2014-01-01

    Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario in order to assess whether presentation times configured by researchers do not differ from measured times more than what is expected due to the hardware limitations. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments. PMID:24409318
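
    The basic logic of such a measurement, comparing the duration a researcher configures against the duration actually delivered, can be sketched in a few lines: timestamp each presentation, accumulate the errors, and report their mean (accuracy) and spread (precision). This generic sketch uses a stand-in presentation callable rather than any specific package, and software timestamps alone omit display latency, which is why studies like this one also rely on external measurement hardware.

    ```python
    import time
    import statistics

    def measure_presentation_error(present_stimulus, intended_ms, n_trials=100):
        """Mean error and jitter (SD), in ms, of a stimulus-presentation routine.

        present_stimulus: a callable that is supposed to display a stimulus
        for intended_ms milliseconds (hypothetical stand-in for the package
        under test).
        """
        errors = []
        for _ in range(n_trials):
            t0 = time.perf_counter()
            present_stimulus(intended_ms)
            elapsed_ms = (time.perf_counter() - t0) * 1000.0
            errors.append(elapsed_ms - intended_ms)
        return statistics.mean(errors), statistics.stdev(errors)

    # Crude stand-in that just sleeps for the requested duration.
    print(measure_presentation_error(lambda ms: time.sleep(ms / 1000.0), 50.0))
    ```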

  19. Spontaneous generalization of abstract multimodal patterns in young domestic chicks.

    PubMed

    Versace, Elisabetta; Spierings, Michelle J; Caffini, Matteo; Ten Cate, Carel; Vallortigara, Giorgio

    2017-05-01

    From the early stages of life, learning the regularities associated with specific objects is crucial for making sense of experiences. Through filial imprinting, young precocial birds quickly learn the features of their social partners by mere exposure. It is not clear though to what extent chicks can extract abstract patterns of the visual and acoustic stimuli present in the imprinting object, and how they combine them. To investigate this issue, we exposed chicks (Gallus gallus) to three days of visual and acoustic imprinting, using either patterns with two identical items or patterns with two different items, presented visually, acoustically or in both modalities. Next, chicks were given a choice between the familiar and the unfamiliar pattern, present in either the multimodal, visual or acoustic modality. The responses to the novel stimuli were affected by their imprinting experience, and the effect was stronger for chicks imprinted with multimodal patterns than for the other groups. Interestingly, males and females adopted a different strategy, with males more attracted by unfamiliar patterns and females more attracted by familiar patterns. Our data show that chicks can generalize abstract patterns by mere exposure through filial imprinting and that multimodal stimulation is more effective than unimodal stimulation for pattern learning.

  20. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    PubMed

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. [Changes in emotional response to visual stimuli with sexual content in drug abusers].

    PubMed

    Aguilar de Arcos, Francisco; Verdejo Garcia, Antonio; Lopez Jimenez, Angeles; Montañez Pareja, Matilde; Gomez Juarez, Encarnacion; Arraez Sanchez, Francisco; Perez Garcia, Miguel

    2008-01-01

    In a phenomenon as complex as drug dependence there is no doubt that affective and emotional aspects are involved. However, there has been little research to date on these emotional aspects, especially in specific relation to everyday affective stimuli, unrelated to drug use. In this work we investigate whether the consumption of narcotic substances causes changes in the emotional response to powerful unconditional natural stimuli, such as those of a sexual nature. To this end, I.A.P.S. images with explicit erotic content were shown to 84 drug-dependent males, in separate groups according to preferred substance. These groups' results were compared with each other and with the values obtained by non-consumers. The results indicate that drug abusers respond differently to visual stimuli with erotic content compared to non-consumers, and that there are also differences in response among consumers according to preferred substance.

  2. Peripheral visual response time to colored stimuli imaged on the horizontal meridian

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Gross, M. M.; Nylen, D.; Dawson, L. M.

    1974-01-01

    Two male observers were administered a binocular visual response time task to small (45 min arc), flashed, photopic stimuli at four dominant wavelengths (632 nm red; 583 nm yellow; 526 nm green; 464 nm blue) imaged across the horizontal retinal meridian. The stimuli were imaged at 10 deg arc intervals from 80 deg left to 90 deg right of fixation. Testing followed either prior light adaptation or prior dark adaptation. Results indicated that mean response time (RT) varies with stimulus color. RT is faster to yellow than to blue and green and slowest to red. In general, mean RT was found to increase from fovea to periphery for all four colors, with the curve for red stimuli exhibiting the most rapid positive acceleration with increasing angular eccentricity from the fovea. The shape of the RT distribution across the retina was also found to depend upon the state of light or dark adaptation. The findings are related to previous RT research and are discussed in terms of optimizing the color and position of colored displays on instrument panels.

  3. Cortical responses from adults and infants to complex visual stimuli.

    PubMed

    Schulman-Galambos, C; Galambos, R

    1978-10-01

    Event-related potentials (ERPs) time-locked to the onset of visual stimuli were extracted from the EEG of normal adult (N = 16) and infant (N = 23) subjects. Subjects were not required to make any response. Stimuli delivered to the adults were 150 msec exposures of 2 sets of colored slides projected in 4 blocks, 2 in focus and 2 out of focus. Infants received 2-sec exposures of slides showing people, colored drawings or scenes from Disneyland, as well as 2-sec illuminations of the experimenter as she played a game or of a TV screen the baby was watching. The adult ERPs showed 6 waves (N1 through P4) in the 140-600 msec range; this included a positive wave at around 350 msec that was large when the stimuli were focused and smaller when they were not. The waves in the 150-200 msec range, by contrast, steadily dropped in amplitude as the experiment progressed. The infant ERPs differed greatly from the adult ones in morphology, usually showing a positive (latency about 200 msec), negative (5-600 msec), positive (1000 msec) sequence. This ERP appeared in all the stimulus conditions; its presence or absence, furthermore, was correlated with whether or not the baby seemed interested in the stimuli. Four infants failed to produce these ERPs; an independent measure of attention to the stimuli, heart rate deceleration, was demonstrated in two of them. An electrode placed beneath the eye to monitor eye movements yielded ERPs closely resembling those derived from the scalp in most subjects; reasons are given for assigning this response to activity in the brain, probably at the frontal pole. This study appears to be one of the first to search for cognitive 'late waves' in a no-task situation. The results suggest that further work with such task-free paradigms may yield additional useful techniques for studying the ERP.
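
    Extracting ERPs "time-locked to the onset of visual stimuli," as described above, amounts to cutting fixed-length epochs around each stimulus trigger, baseline-correcting them, and averaging so that activity not phase-locked to the stimulus tends to cancel. A minimal sketch with assumed sampling rate and window (not the original 1978 analysis pipeline):

    ```python
    import numpy as np

    def average_erp(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
        """Average stimulus-locked epochs from a single-channel EEG trace.

        eeg: 1-D array of samples; onsets: stimulus onset times in seconds;
        fs: sampling rate in Hz. Each epoch is baseline-corrected to its
        pre-stimulus mean.
        """
        pre, post = int(-tmin * fs), int(tmax * fs)
        epochs = []
        for t in onsets:
            i = int(round(t * fs))
            if i - pre < 0 or i + post > len(eeg):
                continue  # skip epochs that fall outside the recording
            epoch = eeg[i - pre:i + post].astype(float)
            epoch -= epoch[:pre].mean()  # baseline correction
            epochs.append(epoch)
        times = np.arange(-pre, post) / fs
        return np.mean(epochs, axis=0), times

    # erp, times = average_erp(raw_eeg, onset_times, fs=256)
    ```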

  4. Emergence of an abstract categorical code enabling the discrimination of temporally structured tactile stimuli

    PubMed Central

    Rossi-Pool, Román; Salinas, Emilio; Zainos, Antonio; Alvarez, Manuel; Vergara, José; Parga, Néstor; Romo, Ranulfo

    2016-01-01

    The problem of neural coding in perceptual decision making revolves around two fundamental questions: (i) How are the neural representations of sensory stimuli related to perception, and (ii) what attributes of these neural responses are relevant for downstream networks, and how do they influence decision making? We studied these two questions by recording neurons in primary somatosensory (S1) and dorsal premotor (DPC) cortex while trained monkeys reported whether the temporal pattern structure of two sequential vibrotactile stimuli (of equal mean frequency) was the same or different. We found that S1 neurons coded the temporal patterns in a literal way and only during the stimulation periods and did not reflect the monkeys’ decisions. In contrast, DPC neurons coded the stimulus patterns as broader categories and signaled them during the working memory, comparison, and decision periods. These results show that the initial sensory representation is transformed into an intermediate, more abstract categorical code that combines past and present information to ultimately generate a perceptually informed choice. PMID:27872293

  5. Unseen stimuli modulate conscious visual experience: evidence from inter-hemispheric summation.

    PubMed

    de Gelder, B; Pourtois, G; van Raamsdonk, M; Vroomen, J; Weiskrantz, L

    2001-02-12

    Emotional facial expression can be discriminated despite extensive lesions of striate cortex. Here we report differential performance with recognition of facial stimuli in the intact visual field depending on simultaneous presentation of congruent or incongruent stimuli in the blind field. Three experiments were based on inter-hemispheric summation. Redundant stimulation in the blind field led to shorter latencies for stimulus detection in the intact field. Recognition of the expression of a half-face in the intact field was faster when the other half of the face presented to the blind field had a congruent expression. Finally, responses to the expression of whole faces presented to the intact field were delayed for incongruent facial expressions presented in the blind field. These results indicate that the neuro-anatomical pathways (extra-striate cortical and sub-cortical) sustaining inter-hemispheric summation can operate in the absence of striate cortex.

  6. [French norms of imagery for pictures, for concrete and abstract words].

    PubMed

    Robin, Frédérique

    2006-09-01

    This paper deals with French norms for mental image versus picture agreement for 138 pictures and the imagery value for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of the objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work lies in providing French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, these norms support studies of variations of figurative and verbal stimuli in visual imagery processes.

  7. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    PubMed Central

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  8. Perceptual category learning of photographic and painterly stimuli in rhesus macaques (Macaca mulatta) and humans

    PubMed Central

    Jensen, Greg; Terrace, Herbert

    2017-01-01

    Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270
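
    "Chance" for ordering four simultaneously presented items can be made concrete: a guesser who ignores the stimuli produces the required sequence with probability \( \tfrac{1}{4!} = \tfrac{1}{4}\times\tfrac{1}{3}\times\tfrac{1}{2}\times 1 = \tfrac{1}{24} \approx 4.2\% \) per trial. This is only the simplest reference point; the paper's own chance and feature-driven baselines may be computed differently (e.g., accounting for correction procedures after errors).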

  9. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  10. How a Visual Language of Abstract Shapes Facilitates Cultural and International Border Crossings

    ERIC Educational Resources Information Center

    Conroy, Arthur Thomas, III

    2016-01-01

    This article describes a visual language comprised of abstract shapes that has been shown to be effective in communicating prior knowledge between and within members of a small team or group. The visual language includes a set of geometric shapes and rules that guide the construction of the abstract diagrams that are the external representation of…

  11. The dissociations of visual processing of "hole" and "no-hole" stimuli: A functional magnetic resonance imaging study.

    PubMed

    Meng, Qianli; Huang, Yan; Cui, Ding; He, Lixia; Chen, Lin; Ma, Yuanye; Zhao, Xudong

    2018-05-01

    "Where to begin" is a fundamental question of vision. A "Global-first" topological approach proposed that the first step in object representation was to extract topological properties, especially whether the object had a hole or not. Numerous psychophysical studies found that the hole (closure) could be rapidly recognized by visual system as a primitive property. However, neuroimaging studies showed that the temporal lobe (IT), which lied at a late stage of ventral pathway, was involved as a dedicated region. It appeared paradoxical that IT served as a key region for processing the early component of visual information. Did there exist a distinct fast route to transit hole information to IT? We hypothesized that a fast noncortical pathway might participate in processing holes. To address this issue, a backward masking paradigm combined with functional magnetic resonance imaging (fMRI) was applied to measure neural responses to hole and no-hole stimuli in anatomically defined cortical and subcortical regions of interest (ROIs) under different visual awareness levels by modulating masking delays. For no-hole stimuli, the neural activation of cortical sites was greatly attenuated when the no-hole perception was impaired by strong masking, whereas an enhanced neural response to hole stimuli in non-cortical sites was obtained when the stimulus was rendered more invisible. The results suggested that whereas the cortical route was required to drive a perceptual response for no-hole stimuli, a subcortical route might be involved in coding the hole feature, resulting in a rapid hole perception in primitive vision.

  12. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2018-04-12

    Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

  13. Information from multiple modalities helps 5-month-olds learn abstract rules.

    PubMed

    Frank, Michael C; Slemmer, Jonathan A; Marcus, Gary F; Johnson, Scott P

    2009-07-01

    By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli (Marcus et al., 1999), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract rules. We habituated 5-month-olds to simple abstract patterns (ABA or ABB) instantiated in coordinated looming visual shapes and speech sounds (Experiment 1), shapes alone (Experiment 2), and speech sounds accompanied by uninformative but coordinated shapes (Experiment 3). Infants showed evidence of rule learning only in the presence of the informative multimodal cues. We hypothesize that the additional evidence present in these multimodal displays was responsible for the success of younger infants in learning rules, congruent with both a Bayesian account and with the Intersensory Redundancy Hypothesis.

  14. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    PubMed Central

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.

    2018-01-01

    Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared with CRT and LCD monitors, an LED display can render high-frequency waveforms with a better refresh rate. In this study, we present simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns were chosen: 3 simple (repetition of one of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 years; 25 ± 2.1) were enrolled. A visual analog scale (VAS) was used to evaluate subjective fatigue after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze the SSVEP responses. The data, including SSVEP features and fatigue rate for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic patterns group with a low THD rate showed a higher accuracy rate (99.24%) than the simple patterns group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple patterns group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25

  15. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    PubMed

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared with CRT and LCD monitors, an LED display can render high-frequency waveforms with a better refresh rate. In this study, we present simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns were chosen: 3 simple (repetition of one of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 years; 25 ± 2.1) were enrolled. A visual analog scale (VAS) was used to evaluate subjective fatigue after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze the SSVEP responses. The data, including SSVEP features and fatigue rate for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic patterns group with a low THD rate showed a higher accuracy rate (99.24%) than the simple patterns group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple patterns group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25
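    As an illustration of the kind of frequency detection the CCA analysis above refers to, the following sketch scores a short EEG segment against sine/cosine reference templates at each candidate stimulus frequency and selects the frequency with the largest canonical correlation. It is a minimal toy example, not the authors' pipeline: the sampling rate, channel count, harmonic count, and data are all assumptions.

      # Minimal CCA-based SSVEP frequency detection (toy data; all parameters assumed).
      import numpy as np
      from sklearn.cross_decomposition import CCA

      FS = 256  # assumed sampling rate (Hz)

      def reference_signals(freq, n_samples, n_harmonics=2, fs=FS):
          # Sine/cosine templates at the stimulus frequency and its harmonics.
          t = np.arange(n_samples) / fs
          refs = []
          for h in range(1, n_harmonics + 1):
              refs.append(np.sin(2 * np.pi * h * freq * t))
              refs.append(np.cos(2 * np.pi * h * freq * t))
          return np.column_stack(refs)

      def classify_ssvep(eeg, candidate_freqs):
          # eeg: array (n_samples, n_channels); returns the candidate frequency whose
          # reference set has the largest canonical correlation with the EEG segment.
          scores = []
          for f in candidate_freqs:
              refs = reference_signals(f, eeg.shape[0])
              cca = CCA(n_components=1)
              cca.fit(eeg, refs)
              u, v = cca.transform(eeg, refs)
              scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
          return candidate_freqs[int(np.argmax(scores))]

      # Toy usage: 2 s of fake 8-channel EEG dominated by a 30 Hz component.
      rng = np.random.default_rng(0)
      t = np.arange(2 * FS) / FS
      eeg = np.sin(2 * np.pi * 30 * t)[:, None] + 0.5 * rng.standard_normal((len(t), 8))
      print(classify_ssvep(eeg, [25, 30, 35]))  # expected: 30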

  16. Visual Stimuli Induce Waves of Electrical Activity in Turtle Cortex

    NASA Astrophysics Data System (ADS)

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-07-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations (<5 Hz) are seen in both ongoing activity and activity induced by visual stimuli. These oscillations propagate parallel to the afferent input. Higher frequency activity, with spectral peaks near 10 and 20 Hz, is seen solely in response to stimulation. This activity consists of plane waves and spiral-like waves, as well as more complex patterns. The plane waves have an average phase gradient of ≈ π /2 radians/mm and propagate orthogonally to the low frequency waves. Our results show that large-scale differences in neuronal timing are present and persistent during visual processing.

  17. Visual stimuli induce waves of electrical activity in turtle cortex

    PubMed Central

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-01-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334–337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations (<5 Hz) are seen in both ongoing activity and activity induced by visual stimuli. These oscillations propagate parallel to the afferent input. Higher frequency activity, with spectral peaks near 10 and 20 Hz, is seen solely in response to stimulation. This activity consists of plane waves and spiral-like waves, as well as more complex patterns. The plane waves have an average phase gradient of ≈π/2 radians/mm and propagate orthogonally to the low frequency waves. Our results show that large-scale differences in neuronal timing are present and persistent during visual processing. PMID:9207142
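    The frequency-domain separation described above (slow <5 Hz waves versus activity with spectral peaks near 10 and 20 Hz) can be illustrated with a small filtering-and-spectrum sketch. This is a toy reconstruction under assumed parameters (frame rate, filter settings, synthetic signal), not the authors' analysis.

      # Toy separation of <5 Hz and 8-25 Hz components of an optical signal
      # (frame rate, filter settings, and signal are all assumed).
      import numpy as np
      from scipy.signal import butter, filtfilt, welch, find_peaks

      FS = 500.0  # assumed frame rate (frames per second)

      def lowpass(x, hi, fs=FS, order=4):
          b, a = butter(order, hi / (fs / 2), btype="low")
          return filtfilt(b, a, x)

      def bandpass(x, lo, hi, fs=FS, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      # Synthetic signal: a 2 Hz ongoing wave plus 10 Hz and 20 Hz evoked components.
      t = np.arange(0, 4, 1 / FS)
      sig = (np.sin(2 * np.pi * 2 * t)
             + 0.4 * np.sin(2 * np.pi * 10 * t)
             + 0.2 * np.sin(2 * np.pi * 20 * t))

      slow = lowpass(sig, 5.0)         # low-frequency band (<5 Hz)
      fast = bandpass(sig, 8.0, 25.0)  # band containing the ~10 and ~20 Hz components

      f, pxx = welch(sig, fs=FS, nperseg=1024)            # power spectrum
      peaks, _ = find_peaks(pxx, height=0.01 * pxx.max())
      print(f[peaks])                  # peak frequencies, approximately 2, 10 and 20 Hz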

  18. Miniature Brain Decision Making in Complex Visual Environments

    DTIC Science & Technology

    2008-07-18

    The grantee investigated, using the honeybee (Apis mellifera) as a model... successful for understanding face processing in both human adults and infants. Individual honeybees (Apis mellifera) were trained with... for 30 bees (group 3) of the target stimuli. Bernard J, Stach S, Giurfa M (2007) Categorization of visual stimuli in the honeybee Apis mellifera.

  19. Visual laterality responses to different emotive stimuli by red-capped mangabeys, Cercocebus torquatus torquatus.

    PubMed

    de Latude, Marion; Demange, Marianne; Bec, Philippe; Blois-Heulin, Catherine

    2009-01-01

    Hemispheric asymmetry in emotional perception has been explained by different theories, such as the right-hemisphere theory and the valence theory, but no consensus has been reached about the role played by each hemisphere. To test these theories, we investigated preferential use of one eye in red-capped mangabeys at the individual as well as the group level. In this study we examined the influence of the emotional value of stimuli on the direction and strength of visual preference in 14 red-capped mangabeys. Temporal stability of the eye-use bias was evaluated by comparing our current results to those obtained 2.5 months previously. Two experimental devices, a tube and a box, were used to test five different stimuli: four food types varying in palatability and a neutral stimulus. The subjects' food preferences were evaluated before laterality testing. The mangabeys used their left eyes predominantly at the group level for the tube task. The majority of the subjects showed a visual preference at the individual level for the box task, but this bias was not present at the group level. As the palatability of the stimuli increased, the number of lateralized subjects and the number of subjects preferentially using their left eye increased. Similarly, the strength of laterality was related to food preference: strength of laterality was significantly higher for subjects using their left eye than for subjects using their right eye. Preferential use of a given eye was stable over the short term, remaining consistent 2.5 months later. Our data agree with reports on visual laterality for other species. Our results support the valence theory, in which control of emotions is shared between the hemispheres according to their emotional value.

  20. Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli

    PubMed Central

    Saproo, Sameer

    2010-01-01

    Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. This sharpening in the population
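    The population response profiles mentioned above are commonly summarized by re-centring each feature-selective voxel's (or channel's) responses on its preferred feature value and averaging across the population. The sketch below illustrates only that re-centring step on synthetic orientation-tuned data; the tuning shape, voxel count, and orientation sampling are assumptions, and this is not the authors' estimation procedure.

      # Toy illustration of building a population tuning profile by re-centring
      # voxel responses on each voxel's preferred orientation (all values synthetic).
      import numpy as np

      N_ORI = 9                      # number of orientation bins (assumed)
      CENTRE = N_ORI // 2

      def tuning_profile(responses, preferred):
          # responses: (n_voxels, N_ORI) response estimates per orientation bin;
          # preferred: index of each voxel's preferred bin. Each voxel's profile is
          # circularly shifted so its preferred bin lands at CENTRE, then averaged.
          centred = np.empty_like(responses)
          for v in range(responses.shape[0]):
              centred[v] = np.roll(responses[v], CENTRE - preferred[v])
          return centred.mean(axis=0)

      # Synthetic voxels: noisy cosine tuning curves shifted to random preferred bins.
      rng = np.random.default_rng(2)
      base = 1 + np.cos(2 * np.pi * (np.arange(N_ORI) - CENTRE) / N_ORI)  # peak at CENTRE
      pref = rng.integers(0, N_ORI, size=50)
      resp = np.array([np.roll(base, p - CENTRE) for p in pref])
      resp += 0.2 * rng.standard_normal(resp.shape)
      print(np.round(tuning_profile(resp, pref), 2))  # profile peaks at the centre bin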

  1. Response time to colored stimuli in the full visual field

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Dawson, L. M.; Galvan, T.; Reid, L. M.

    1975-01-01

    Peripheral visual response time was measured in seven dark adapted subjects to the onset of small (45' arc diam), brief (50 msec), colored (blue, yellow, green, red) and white stimuli imaged at 72 locations within their binocular field of view. The blue, yellow, and green stimuli were matched for brightness at about 2.6 log10 units above their absolute light threshold, and they appeared at an unexpected time and location. These data were obtained to provide response time and no-response data for use in various design disciplines involving instrument panel layout. The results indicated that the retina possesses relatively concentric regions within each of which mean response time can be expected to be of approximately the same duration. These regions are centered near the fovea and extend farther horizontally than vertically. Mean foveal response time was fastest for yellow and slowest for blue. Three and one-half percent of the total 56,410 trials presented resulted in no-responses. Regardless of stimulus color, the lowest percentage of no-responses occurred within 30 deg arc from the fovea and the highest within 40 deg to 80 deg arc below the fovea.

  2. Multisensory integration and the concert experience: An overview of how visual stimuli can affect what we hear

    NASA Astrophysics Data System (ADS)

    Hyde, Jerald R.

    2004-05-01

    It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event engages all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]

  3. An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.

    PubMed

    Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan

    2015-08-15

    This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task, involving randomly presented frequent stimuli and two types of infrequent stimuli - targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. As for distractors that follow targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare stimuli intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.
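    The modified categorization of rare stimuli described above amounts to relabelling each rare event by the type of the rare event that preceded it and counting how many frequent stimuli intervened (the rare-to-rare interval). A minimal sketch of that bookkeeping step is given below; the event representation and codes are hypothetical, not the authors' analysis code.

      # Relabel rare oddball events by the preceding rare event's type and the
      # number of intervening frequent stimuli (hypothetical event codes).
      from dataclasses import dataclass

      @dataclass
      class Event:
          onset: float      # seconds
          kind: str         # 'frequent', 'target', or 'distractor'

      def recategorize(events):
          # For every rare event, record the preceding rare event's type and the
          # count of frequent stimuli presented in between.
          out, last_rare, n_freq = [], None, 0
          for ev in events:
              if ev.kind == "frequent":
                  n_freq += 1
                  continue
              if last_rare is not None:
                  out.append((ev.onset, f"{last_rare.kind}->{ev.kind}", n_freq))
              last_rare, n_freq = ev, 0
          return out

      seq = [Event(0.0, "frequent"), Event(1.5, "target"), Event(3.0, "frequent"),
             Event(4.5, "frequent"), Event(6.0, "distractor"), Event(7.5, "target")]
      print(recategorize(seq))
      # [(6.0, 'target->distractor', 2), (7.5, 'distractor->target', 0)]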

  4. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  5. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  6. Stress Sensitive Healthy Females Show Less Left Amygdala Activation in Response to Withdrawal-Related Visual Stimuli under Passive Viewing Conditions

    ERIC Educational Resources Information Center

    Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert

    2012-01-01

    The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…

  7. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we often must face the same challenge presented in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we manipulated the audio-visual relational congruency of bimodal stimulation to disentangle possible sources of facilitation. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  8. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    PubMed

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  9. Binocular coordination in response to stereoscopic stimuli

    NASA Astrophysics Data System (ADS)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

    Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.

  10. Neural correlates of visualizations of concrete and abstract words in preschool children: a developmental embodied approach

    PubMed Central

    D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando

    2015-01-01

    The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a “post-anterior” pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an “ant-posterior” pathway sequence: frontal, temporal, parietal, and occipital areas. These results suggest that, similarly, for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying

  11. Top-down and bottom-up competition in visual stimuli processing.

    PubMed

    Ligeza, Tomasz S; Tymorek, Agnieszka D; Wyczesany, Mirosław

    2017-01-01

    Because attention capacity is limited, not all stimuli present in the visual field are processed equally. While processing of salient stimuli is automatically boosted by bottom-up attention, processing of task-relevant stimuli can be boosted volitionally by top-down attention. Usually, both top-down and bottom-up influences are present simultaneously, which creates a competition between these two types of attention. We examined this competition using both behavioral and electrophysiological measures. Participants responded to letters superimposed on background pictures. We assumed that responding to different conditions of the letter task engages top-down attention to different extents, whereas processing of background pictures of varying salience engages bottom-up attention to different extents. To check how manipulation of top-down attention influences bottom-up processing, we measured event-related potentials (ERPs) in response to pictures (engaging mostly bottom-up attention) during three conditions of a letter task (different levels of top-down engagement). Conversely, to check how manipulation of bottom-up attention influences top-down processing, we measured ERP responses to letters (engaging mostly top-down attention) while manipulating the salience of background pictures (different levels of bottom-up engagement). Accuracy and reaction times in response to letters were also analyzed. As expected, most of the ERP and behavioral measures revealed a trade-off between the two types of processing: a decrease in bottom-up processing was associated with an increase in top-down processing and, similarly, a decrease in top-down processing was associated with an increase in bottom-up processing. These results demonstrate competition between the two types of attention.

  12. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  13. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  14. The relationship between age and brain response to visual erotic stimuli in healthy heterosexual males.

    PubMed

    Seo, Y; Jeong, B; Kim, J-W; Choi, J

    2010-01-01

    Various changes in sexuality, including decreased sexual desire and erectile dysfunction, accompany aging. To understand the effect of aging on sexuality, we explored the relationship between age and the brain response to visual erotic stimulation in sexually active male subjects. Functional magnetic resonance imaging (fMRI) signals were recorded from twelve healthy, heterosexual male subjects (age 22-47 years) while they passively viewed erotic (ERO), happy-faced (HA) couple, food, and nature pictures. Mixed-effects analysis and correlation analysis were performed to investigate the relationship between age and the change in brain activity elicited by erotic stimuli. Our results showed that age was positively correlated with activation of the right occipital fusiform gyrus and amygdala, and negatively correlated with activation of the right insula and inferior frontal gyrus. These findings suggest that age may be related to functional decline in brain regions involved in both interoceptive sensation and prefrontal modulation, while being related to increased activity in the brain region for early processing of visual emotional stimuli in sexually healthy men.

  15. Global/local processing of hierarchical visual stimuli in a conflict-choice task by capuchin monkeys (Sapajus spp.).

    PubMed

    Truppa, Valentina; Carducci, Paola; De Simone, Diego Antonio; Bisazza, Angelo; De Lillo, Carlo

    2017-03-01

    In the last two decades, comparative research has addressed the issue of how the global and local levels of structure of visual stimuli are processed by different species, using Navon-type hierarchical figures, i.e. smaller local elements that form larger global configurations. Determining whether or not the variety of procedures adopted to test different species with hierarchical figures are equivalent is of crucial importance to ensure comparability of results. Among non-human species, global/local processing has been extensively studied in tufted capuchin monkeys using matching-to-sample tasks with hierarchical patterns. Local dominance has emerged consistently in these New World primates. In the present study, we assessed capuchins' processing of hierarchical stimuli with a method frequently adopted in studies of global/local processing in non-primate species: the conflict-choice task. Different from the matching-to-sample procedure, this task involved processing local and global information retained in long-term memory. Capuchins were trained to discriminate between consistent hierarchical stimuli (similar global and local shape) and then tested with inconsistent hierarchical stimuli (different global and local shapes). We found that capuchins preferred the hierarchical stimuli featuring the correct local elements rather than those with the correct global configuration. This finding confirms that capuchins' local dominance, typically observed using matching-to-sample procedures, is also expressed as a local preference in the conflict-choice task. Our study adds to the growing body of comparative studies on visual grouping functions by demonstrating that the methods most frequently used in the literature on global/local processing produce analogous results irrespective of extent of the involvement of memory processes.

  16. Visual acuity measured with luminance-modulated and contrast-modulated noise letter stimuli in young adults and adults above 50 years old

    PubMed Central

    Woi, Pui Juan; Kaur, Sharanjeet; Waugh, Sarah J.; Hairol, Mohd Izzuddin

    2016-01-01

    The human visual system is sensitive to objects that differ in luminance from their background, known as first-order or luminance-modulated (LM) stimuli. We are also able to detect objects that have the same mean luminance as their background and differ only in contrast (or other attributes); such objects are known as second-order or contrast-modulated (CM) stimuli. CM stimuli are thought to be processed in higher visual areas than LM stimuli, and may be more susceptible to ageing. We compared visual acuities (VA) of five healthy older adults (54.0 ± 1.83 years old) and five healthy younger adults (25.4 ± 1.29 years old) with LM and CM letters under monocular and binocular viewing. For monocular viewing, age had no effect on VA [F(1, 8) = 2.50, p > 0.05]. However, there was a significant main effect of age on VA under binocular viewing [F(1, 8) = 5.67, p < 0.05]. Binocular VA with CM letters in younger adults was approximately two lines better than that in older adults. For LM, binocular summation ratios were similar for older (1.16 ± 0.21) and younger (1.15 ± 0.06) adults. For CM, younger adults had a higher binocular summation ratio (1.39 ± 0.08) than older adults (1.12 ± 0.09). Binocular viewing improved VA with LM letters similarly for both groups. However, in older adults, binocular viewing did not improve VA with CM letters as much as in younger adults. This could reflect an ageing-related decline of higher visual areas, most likely beyond V1, which may be missed if acuity is measured with luminance-based stimuli alone. PMID:28184281

  17. Virtual reality stimuli for force platform posturography.

    PubMed

    Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko

    2002-01-01

    People who rely heavily on vision for postural control are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be identified. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.

  18. Visual attention to meaningful stimuli by 1- to 3-year olds: implications for the measurement of memory.

    PubMed

    Hayne, Harlene; Jaeger, Katja; Sonne, Trine; Gross, Julien

    2016-11-01

    The visual recognition memory (VRM) paradigm has been widely used to measure memory during infancy and early childhood; it has also been used to study memory in human and nonhuman adults. Typically, participants are familiarized with stimuli that have no special significance to them. Under these conditions, greater attention to the novel stimulus during the test (i.e., novelty preference) is used as the primary index of memory. Here, we took a novel approach to the VRM paradigm and tested 1-, 2-, and 3-year olds using photos of meaningful stimuli that were drawn from the participants' own environment (e.g., photos of their mother, father, siblings, house). We also compared their performance to that of participants of the same age who were tested in an explicit pointing version of the VRM task. Two- and 3-year olds exhibited a strong familiarity preference for some, but not all, of the meaningful stimuli; 1-year olds did not. At no age did participants exhibit the kind of novelty preference that is commonly used to define memory in the VRM task. Furthermore, when compared to pointing, looking measures provided a rough approximation of recognition memory, but in some instances, the looking measure underestimated retention. The use of meaningful stimuli raises important questions about the way in which visual attention is interpreted in the VRM paradigm, and may provide new opportunities to measure memory during infancy and early childhood. © 2016 Wiley Periodicals, Inc.

  19. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    ERIC Educational Resources Information Center

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  20. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  1. Selective attention to visual compound stimuli in squirrel monkeys (Saimiri sciureus).

    PubMed

    Ploog, Bertram O

    2011-05-01

    Five squirrel monkeys served under a simultaneous discrimination paradigm with visual compound stimuli that allowed measurement of excitatory and inhibitory control exerted by individual stimulus components (form and luminance/"color"), which could not be presented in isolation (i.e., form could not be presented without color). After performance exceeded a criterion of 75% correct during training, unreinforced test trials with stimuli comprising recombined training stimulus components were interspersed while the overall reinforcement rate remained constant for training and testing. The training-testing series was then repeated with reversed reinforcement contingencies. The findings were that color acquired greater excitatory control than form under the original condition, that no such difference was found for the reversal condition or for inhibitory control under either condition, and that overall inhibitory control was less pronounced than excitatory control. The remarkably accurate performance throughout suggested that a forced 4-s delay between the stimulus presentation and the opportunity to respond was effective in reducing "impulsive" responding, which has implications for suppressing impulsive responding in children with autism and with attention deficit disorder. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Time- and Space-Order Effects in Timed Discrimination of Brightness and Size of Paired Visual Stimuli

    ERIC Educational Resources Information Center

    Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake

    2012-01-01

    Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…

  3. Visual Categorization of Natural Movies by Rats

    PubMed Central

    Vinken, Kasper; Vermaercke, Ben

    2014-01-01

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598

  4. Temporal attention for visual food stimuli in restrained eaters.

    PubMed

    Neimeijer, Renate A M; de Jong, Peter J; Roefs, Anne

    2013-05-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require fewer attentional resources to enter awareness. Once a food stimulus has captured attention, it may be preferentially processed and granted prioritized access to limited cognitive resources. This might help explain why restrained eaters often fail in their attempts to restrict their food intake. A Rapid Serial Visual Presentation task consisting of dual- and single-target trials with food and neutral pictures as targets and/or distractors was administered to restrained (n=40) and unrestrained (n=40) eaters to study temporal attentional bias. Results indicated that (1) food cues did not diminish the attentional blink in restrained eaters when presented as the second target; (2) specifically restrained eaters showed an interference effect of identifying food targets on the identification of preceding neutral targets; (3) for both restrained and unrestrained eaters, food cues enhanced the attentional blink; (4) specifically in restrained eaters, food distractors elicited an attentional blink in the single-target trials. In restrained eaters, food cues get prioritized access to limited cognitive resources, even if this processing priority interferes with their current goals. This temporal attentional bias for food stimuli might help explain why restrained eaters typically have difficulties maintaining their diet rules. Copyright © 2012 Elsevier Ltd. All rights reserved.
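    Temporal attentional bias in such RSVP designs is usually quantified as second-target (T2) report accuracy conditional on a correct first-target (T1) report, as a function of T1-T2 lag. The sketch below shows that scoring step on made-up trial records; the field names and data are hypothetical, not the authors' analysis code.

      # Attentional-blink scoring: P(T2 correct | T1 correct) per T1-T2 lag.
      from collections import defaultdict

      def blink_curve(trials):
          # trials: iterable of dicts with keys 'lag', 't1_correct', 't2_correct'.
          hits, counts = defaultdict(int), defaultdict(int)
          for t in trials:
              if not t["t1_correct"]:
                  continue                       # condition on correct T1 report
              counts[t["lag"]] += 1
              hits[t["lag"]] += int(t["t2_correct"])
          return {lag: hits[lag] / counts[lag] for lag in sorted(counts)}

      demo = [{"lag": 2, "t1_correct": True,  "t2_correct": False},
              {"lag": 2, "t1_correct": True,  "t2_correct": True},
              {"lag": 7, "t1_correct": True,  "t2_correct": True},
              {"lag": 7, "t1_correct": False, "t2_correct": True}]
      print(blink_curve(demo))   # {2: 0.5, 7: 1.0}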

  5. Local contextual processing of abstract and meaningful real-life images in professional athletes.

    PubMed

    Fogelson, Noa; Fernandez-Del-Olmo, Miguel; Acero, Rafael Martín

    2012-05-01

    We investigated the effect of abstract versus real-life meaningful images from sports on local contextual processing in two groups of professional athletes. Local context was defined as the occurrence of a short predictive series of stimuli occurring before delivery of a target event. EEG was recorded in 10 professional basketball players and 9 professional athletes of individual sports during three sessions. In each session, a different set of visual stimuli were presented: triangles facing left, up, right, or down; four images of a basketball player throwing a ball; four images of a baseball player pitching a baseball. Stimuli consisted of 15 % targets and 85 % of equal numbers of three types of standards. Recording blocks consisted of targets preceded by randomized sequences of standards and by sequences including a predictive sequence signaling the occurrence of a subsequent target event. Subjects pressed a button in response to targets. In all three sessions, reaction times and peak P3b latencies were shorter for predicted targets compared with random targets, the last most informative stimulus of the predictive sequence induced a robust P3b, and N2 amplitude was larger for random targets compared with predicted targets. P3b and N2 peak amplitudes were larger in the professional basketball group in comparison with professional athletes of individual sports, across the three sessions. The findings of this study suggest that local contextual information is processed similarly for abstract and for meaningful images and that professional basketball players seem to allocate more attentional resources in the processing of these visual stimuli.

  6. A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners

    PubMed Central

    Greco, V.; Frijia, F.; Mikellidou, K.; Montanaro, D.; Farini, A.; D’Uva, M.; Poggi, P.; Pucci, M.; Sordini, A.; Morrone, M. C.; Burr, D. C.

    2016-01-01

    We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80°). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields. PMID:26092392
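    For context, the projected field of view quoted above (~80°) follows directly from the geometry of the screen and the viewing distance. The snippet below computes the visual angle subtended by a flat screen of a given width at a given distance; the example numbers are arbitrary, not the dimensions of the authors' setup.

      # Visual angle subtended by a flat screen of a given width at a given distance.
      import math

      def visual_angle_deg(width_m, distance_m):
          return 2 * math.degrees(math.atan(width_m / (2 * distance_m)))

      # Example with arbitrary numbers: a 25 cm wide screen viewed from 15 cm
      # subtends roughly 80 degrees.
      print(round(visual_angle_deg(0.25, 0.15), 1))  # ~79.6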

  7. Synergistic interaction between baclofen administration into the median raphe nucleus and inconsequential visual stimuli on investigatory behavior of rats

    PubMed Central

    Vollrath-Smith, Fiori R.; Shin, Rick

    2011-01-01

    Rationale Noncontingent administration of amphetamine into the ventral striatum or systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) failed to concur with increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist, SCH 50911, confirming the involvement of local GABAB receptors. Seeking for visual stimulus also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist, SCH 23390 (0.025 mg/kg), suggesting enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820

  8. Heart rate reactivity associated to positive and negative food and non-food visual stimuli.

    PubMed

    Kuoppa, Pekka; Tarvainen, Mika P; Karhunen, Leila; Narvainen, Johanna

    2016-08-01

    Using food as a stimulus is known to cause multiple psychophysiological reactions. Heart rate variability (HRV) is a common tool for assessing physiological reactions of the autonomic nervous system. However, findings on HRV related to food stimuli have not been consistent. In this paper, the rapid changes in HRV related to positive and negative food and non-food visual stimuli are investigated. The electrocardiogram (ECG) was measured from 18 healthy females while they viewed the pictures. Subjects also completed the Three-Factor Eating Questionnaire to determine their eating behavior. The inter-beat-interval time series and the HRV parameters were extracted from the ECG. Rapid changes in HRV parameters were studied by calculating the change from the baseline value (10 s window before the stimulus) to the value after stimulus onset (10 s window during the stimulus). A paired t-test showed a significant difference between positive and negative food pictures but not between positive and negative non-food pictures. All HRV parameters decreased for positive food pictures, while they stayed the same or increased slightly for negative food pictures. The eating behavior characteristic of cognitive restraint was negatively correlated with HRV parameters that reflect a decrease in heart rate.
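    The baseline-to-stimulus change described above can be computed directly from R-peak times: take simple HRV measures in the 10 s window before stimulus onset and in the 10 s window after it, and subtract. The sketch below does this for a synthetic beat train; the specific measures shown (mean RR, SDNN, RMSSD) are illustrative choices, not necessarily the parameter set used in the study.

      # Change in simple HRV measures from a 10-s pre-stimulus baseline to a 10-s
      # window during the stimulus, computed from R-peak times (synthetic data).
      import numpy as np

      def hrv_measures(rpeaks, t_start, t_end):
          # rpeaks: R-peak times (s). Uses inter-beat intervals whose ending beat
          # falls in [t_start, t_end); returns mean RR, SDNN and RMSSD in ms.
          rr = np.diff(rpeaks) * 1000.0
          t_rr = rpeaks[1:]
          sel = rr[(t_rr >= t_start) & (t_rr < t_end)]
          rmssd = np.sqrt(np.mean(np.diff(sel) ** 2))
          return sel.mean(), sel.std(ddof=1), rmssd

      def stimulus_reactivity(rpeaks, onset):
          # Difference between the 10 s during-stimulus window and the 10 s baseline.
          base = hrv_measures(rpeaks, onset - 10.0, onset)
          stim = hrv_measures(rpeaks, onset, onset + 10.0)
          return np.array(stim) - np.array(base)

      # Toy usage: a ~70 bpm beat train with small jitter and a stimulus at t = 60 s.
      rng = np.random.default_rng(1)
      rpeaks = np.cumsum(0.857 + 0.02 * rng.standard_normal(120))
      print(np.round(stimulus_reactivity(rpeaks, onset=60.0), 2))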

  9. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1976-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location, including a projection system for displaying to a patient a series of visual stimuli, a response switch enabling the patient to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  10. Dissociating object-based from egocentric transformations in mental body rotation: effect of stimuli size.

    PubMed

    Habacha, Hamdi; Moreau, David; Jarraya, Mohamed; Lejeune-Poutrain, Laure; Molinaro, Corinne

    2018-01-01

    The effect of stimuli size on the mental rotation of abstract objects has been extensively investigated, yet its effect on the mental rotation of bodily stimuli remains largely unexplored. Depending on the experimental design, mentally rotating bodily stimuli can elicit object-based transformations, relying mainly on visual processes, or egocentric transformations, which typically involve embodied motor processes. The present study included two mental body rotation tasks requiring either a same-different or a laterality judgment, designed to elicit object-based or egocentric transformations, respectively. Our findings revealed shorter response times for large-sized stimuli than for small-sized stimuli only for greater angular disparities, suggesting that the more unfamiliar the orientations of the bodily stimuli, the more stimuli size affected mental processing. Importantly, when comparing size transformation times, results revealed different patterns of size transformation times as a function of angular disparity between object-based and egocentric transformations. This indicates that mental size transformation and mental rotation proceed differently depending on the mental rotation strategy used. These findings are discussed with respect to the different spatial manipulations involved during object-based and egocentric transformations.

  11. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  12. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    PubMed

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years old) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age in children. Intrasubject variability and intersubject variability were inversely related to age in children. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
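
    The thresholding procedure above is described only as "up/down staircase algorithms with stimuli of varying diameter"; the sketch below shows one plausible 1-up/1-down staircase over stimulus diameter. The step size, starting diameter, stopping rule, and the `sees_stimulus` callback are all assumptions for illustration, not details from the paper.

    ```python
    def staircase_threshold(sees_stimulus, start_diameter=2.0, step=0.2,
                            min_diameter=0.1, reversals_to_stop=6):
        """Estimate a size threshold with a simple 1-up/1-down staircase.

        sees_stimulus(diameter) is a hypothetical callback returning True if the
        child responded to a presentation of that diameter (degrees).
        """
        diameter = start_diameter
        last_direction = None
        reversal_diameters = []
        while len(reversal_diameters) < reversals_to_stop:
            direction = -1 if sees_stimulus(diameter) else +1  # seen -> smaller, missed -> larger
            if last_direction is not None and direction != last_direction:
                reversal_diameters.append(diameter)            # a reversal occurred here
            last_direction = direction
            diameter = max(min_diameter, diameter + direction * step)
        return sum(reversal_diameters) / len(reversal_diameters)  # mean of reversal points
    ```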

  13. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  14. Consistency of Border-Ownership Cells across Artificial Stimuli, Natural Stimuli, and Stimuli with Ambiguous Contours.

    PubMed

    Hesse, Janis K; Tsao, Doris Y

    2016-11-02

    Segmentation and recognition of objects in a visual scene are two problems that are hard to solve separately from each other. When segmenting an ambiguous scene, it is helpful to already know the present objects and their shapes. However, for recognizing an object in clutter, one would like to consider its isolated segment alone to avoid confounds from features of other objects. Border-ownership cells (Zhou et al., 2000) appear to play an important role in segmentation, as they signal the side-of-figure of artificial stimuli. The present work explores the role of border-ownership cells in dorsal macaque visual areas V2 and V3 in the segmentation of natural object stimuli and locally ambiguous stimuli. We report two major results. First, compared with previous estimates, we found a smaller percentage of cells that were consistent across artificial stimuli used previously. Second, we found that the average response of those neurons that did respond consistently to the side-of-figure of artificial stimuli also consistently signaled, as a population, the side-of-figure for borders of single faces, occluding faces and, with higher latencies, even stimuli with illusory contours, such as Mooney faces and natural faces completely missing local edge information. In contrast, the local edge or the outlines of the face alone could not always evoke a significant border-ownership signal. Our results underscore that border ownership is coded by a population of cells, and indicate that these cells integrate a variety of cues, including low-level features and global object context, to compute the segmentation of the scene. To distinguish different objects in a natural scene, the brain must segment the image into regions corresponding to objects. The so-called "border-ownership" cells appear to be dedicated to this task, as they signal for a given edge on which side the object is that owns it. Here, we report that individual border-ownership cells are unreliable when tested across

  15. Contextual control using a go/no-go procedure with compound abstract stimuli.

    PubMed

    Modenesi, Rafael Diego; Debert, Paula

    2015-05-01

    Contextual control has been described as (1) a five-term contingency, in which the contextual stimulus exerts conditional control over conditional discriminations, and (2) allowing one stimulus to be a member of different equivalence classes without merging them into one. Matching-to-sample is the most commonly employed procedure to produce and study contextual control. The present study evaluated whether the go/no-go procedure with compound stimuli produces equivalence classes that share stimuli. This procedure does not allow the identification of specific stimulus functions (e.g., contextual, conditional, or discriminative functions). If equivalence classes were established with this procedure, then only the latter part of the contextual control definition (2) would be met. Six undergraduate students participated in the present study. In the training phases, responses to AC, BD, and XY compounds with stimuli from the same classes were reinforced, and responses to AC, BD, and XY compounds with stimuli from different classes were not. In addition, responses to X1A1B1, X1A2B2, X2A1B2, and X2A2B1 compounds were reinforced and responses to the other combinations were not. During the tests, the participants had to respond to new combinations of stimuli compounds YCD to indicate the formation of four equivalence classes that share stimuli: X1A1B1Y1C1D1, X1A2B2Y1C2D2, X2A1B2Y2C1D2, and X2A2B1Y2C2D1. Four of the six participants showed the establishment of these classes. These results indicate that establishing contextual stimulus functions is unnecessary to produce equivalence classes that share stimuli. Therefore, these results are inconsistent with the first part of the definition of contextual control. © Society for the Experimental Analysis of Behavior.

  16. Differences in apparent straightness of dot and line stimuli.

    NASA Technical Reports Server (NTRS)

    Parlee, M. B.

    1972-01-01

    An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.

  17. Paths with more turns are perceived as longer: misperceptions with map-based and abstracted path stimuli.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Taylor, Holly A

    2015-04-01

    When navigating, people tend to overestimate distances when routes contain more turns, termed the route-angularity effect. Three experiments examined the source and generality of this effect. The first two experiments examined whether route-angularity effects occur while viewing maps and might be related to sex differences or sense of direction. The third experiment tested whether the route-angularity effect would occur with stimuli devoid of spatial context, reducing influences of environmental experience and visual complexity. In the three experiments, participants (N=1,552; M=32.2 yr.; 992 men, 560 women) viewed paths plotted on maps (Exps. 1 and 2) or against a blank background (Exp. 3). The depicted paths were always the same overall length, but varied in the number of turns (from 1 to 7) connecting an origin and destination. Participants were asked to estimate the time to traverse each path (Exp. 1) or the length of each path (Exps. 2 and 3). The Santa Barbara Sense of Direction questionnaire was administered to assess whether overall spatial sense of direction would be negatively related to the magnitude of the route-angularity effect. Repeated-measures analyses of variance (ANOVAs) indicated that paths with more turns elicited estimates of greater distance and travel times, whether they were depicted on maps or blank backgrounds. Linear regressions also indicated that these effects were significantly larger in those with a relatively low sense of direction. The results support the route-angularity effect and extend it to paths plotted on map-based stimuli. Furthermore, because the route-angularity effect was shown with paths plotted against blank backgrounds, route-angularity effects are not specific to understanding environments and may arise at the level of visual perception.

  18. Visual categorization of natural movies by rats.

    PubMed

    Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P

    2014-08-06

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors 0270-6474/14/3410645-14$15.00/0.

  19. Postural Instability Induced by Visual Motion Stimuli in Patients With Vestibular Migraine

    PubMed Central

    Lim, Yong-Hyun; Kim, Ji-Soo; Lee, Ho-Won; Kim, Sung-Hee

    2018-01-01

    Patients with vestibular migraine are susceptible to motion sickness. This study aimed to determine whether the severity of postural instability is related to the susceptibility to motion sickness. We used a visual motion paradigm with two conditions of the stimulated retinal field and the head posture to quantify postural stability while maintaining a static stance in 18 patients with vestibular migraine (VM) and in 13 age-matched healthy subjects. Three parameters of postural stability showed differences between VM patients and controls: RMS velocity (0.34 ± 0.02 cm/s vs. 0.28 ± 0.02 cm/s), RMS acceleration (8.94 ± 0.74 cm/s² vs. 6.69 ± 0.87 cm/s²), and sway area (1.77 ± 0.22 cm² vs. 1.04 ± 0.25 cm²). Patients with vestibular migraine showed marked postural instability of the head and neck when visual stimuli were presented in the retinal periphery. The pseudo-Coriolis effect induced by head roll tilt was not responsible for the main differences in postural instability between patients and controls. Patients with vestibular migraine showed a higher visual dependency and low stability of the postural control system when maintaining quiet standing, which may be related to susceptibility to motion sickness. PMID:29930534

  20. Postural Instability Induced by Visual Motion Stimuli in Patients With Vestibular Migraine.

    PubMed

    Lim, Yong-Hyun; Kim, Ji-Soo; Lee, Ho-Won; Kim, Sung-Hee

    2018-01-01

    Patients with vestibular migraine are susceptible to motion sickness. This study aimed to determine whether the severity of postural instability is related to the susceptibility to motion sickness. We used a visual motion paradigm with two conditions of the stimulated retinal field and the head posture to quantify postural stability while maintaining a static stance in 18 patients with vestibular migraine (VM) and in 13 age-matched healthy subjects. Three parameters of postural stability showed differences between VM patients and controls: RMS velocity (0.34 ± 0.02 cm/s vs. 0.28 ± 0.02 cm/s), RMS acceleration (8.94 ± 0.74 cm/s² vs. 6.69 ± 0.87 cm/s²), and sway area (1.77 ± 0.22 cm² vs. 1.04 ± 0.25 cm²). Patients with vestibular migraine showed marked postural instability of the head and neck when visual stimuli were presented in the retinal periphery. The pseudo-Coriolis effect induced by head roll tilt was not responsible for the main differences in postural instability between patients and controls. Patients with vestibular migraine showed a higher visual dependency and low stability of the postural control system when maintaining quiet standing, which may be related to susceptibility to motion sickness.
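
    The three stability parameters reported in this record can be derived from a centre-of-pressure (or head-marker) trajectory roughly as follows. The finite-difference velocity and acceleration follow directly from the definitions of RMS velocity and RMS acceleration; defining sway area as a 95% confidence ellipse, and the sampling-rate handling, are assumptions about the analysis rather than details given in the abstract.

    ```python
    import numpy as np

    def sway_parameters(cop_xy_cm, fs_hz):
        """RMS velocity (cm/s), RMS acceleration (cm/s²) and sway area (cm²)
        from an (n_samples, 2) centre-of-pressure trace sampled at fs_hz."""
        dt = 1.0 / fs_hz
        vel = np.gradient(cop_xy_cm, dt, axis=0)    # cm/s
        acc = np.gradient(vel, dt, axis=0)          # cm/s²
        rms_velocity = np.sqrt(np.mean(np.sum(vel ** 2, axis=1)))
        rms_acceleration = np.sqrt(np.mean(np.sum(acc ** 2, axis=1)))
        # Sway area as the 95% confidence ellipse of the 2-D COP distribution (assumed definition)
        eigvals = np.linalg.eigvalsh(np.cov(cop_xy_cm.T))
        sway_area = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi²(2 df, 0.95) ≈ 5.991
        return rms_velocity, rms_acceleration, sway_area
    ```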

  1. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  2. Gender differences in the processing of standard emotional visual stimuli: integrating ERP and fMRI results

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin

    2005-04-01

    The comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotional recognition as well as to follow the time sequence with millisecond-range resolution. The effect of visual stimuli from the International Affective Picture System (IAPS) on brain activation was examined in both genders. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP were employed in an event-related design. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but source localization and timing are limited by the ill-posed 'inverse' problem. We investigate the ERP source reconstruction problem in this study using an fMRI constraint. We chose ICA as a pre-processing step of ERP source reconstruction to exclude artifacts and provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.

  3. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

    Kelly, R R; Tomlinson-Keasey, C

    1976-12-01

    Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented with familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.

  4. Human observers have optimal introspective access to perceptual processes even for visually masked stimuli

    PubMed Central

    Peters, Megan A K; Lau, Hakwan

    2015-01-01

    Many believe that humans can ‘perceive unconsciously’ – that for weak stimuli, briefly presented and masked, above-chance discrimination is possible without awareness. Interestingly, an online survey reveals that most experts in the field recognize the lack of convincing evidence for this phenomenon, and yet they persist in this belief. Using a recently developed bias-free experimental procedure for measuring subjective introspection (confidence), we found no evidence for unconscious perception; participants’ behavior matched that of a Bayesian ideal observer, even though the stimuli were visually masked. This surprising finding suggests that the thresholds for subjective awareness and objective discrimination are effectively the same: if objective task performance is above chance, there is likely conscious experience. These findings shed new light on decades-old methodological issues regarding what it takes to consider a neurobiological or behavioral effect to be 'unconscious,' and provide a platform for rigorously investigating unconscious perception in future studies. DOI: http://dx.doi.org/10.7554/eLife.09651.001 PMID:26433023

  5. Brain reactivity to visual food stimuli after moderate-intensity exercise in children.

    PubMed

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; Larson, Michael J; Keller, Kathleen L; Fearnbach, S Nicole; Evans, Alyssa; LeCheminant, James D

    2017-09-19

    Exercise may play a role in moderating eating behaviors. The purpose of this study was to examine the effect of an acute bout of exercise on neural responses to visual food stimuli in children ages 8-11 years. We hypothesized that acute exercise would result in reduced activity in reward areas of the brain. Using a randomized cross-over design, 26 healthy weight children completed two separate laboratory conditions (exercise; sedentary). During the exercise condition, each participant completed a 30-min bout of exercise at moderate-intensity (~ 67% HR maximum) on a motor-driven treadmill. During the sedentary session, participants sat continuously for 30 min. Neural responses to high- and low-calorie pictures of food were determined immediately following each condition using functional magnetic resonance imaging. There was a significant exercise condition*stimulus-type (high- vs. low-calorie pictures) interaction in the left hippocampus and right medial temporal lobe (p < 0.05). Main effects of exercise condition were observed in the left posterior central gyrus (reduced activation after exercise) (p < 0.05) and the right anterior insula (greater activation after exercise) (p < 0.05). The left hippocampus, right medial temporal lobe, left posterior central gyrus, and right anterior insula appear to be activated by visual food stimuli differently following an acute bout of exercise compared to a non-exercise sedentary session in 8-11 year-old children. Specifically, an acute bout of exercise results in greater activation to high-calorie and reduced activation to low-calorie pictures of food in both the left hippocampus and right medial temporal lobe. This study shows that the response to external food cues can be altered by exercise, and understanding this mechanism will inform the development of future interventions aimed at altering energy intake in children.

  6. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon

    PubMed Central

    Rieucau, Guillaume; Burke, Darren

    2017-01-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards’ ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965

  7. Language experience shapes early electrophysiological responses to visual stimuli: the effects of writing system, stimulus length, and presentation duration.

    PubMed

    Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi

    2008-02-15

    How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.

  8. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point-of-gaze measurements declined as the distance from the remote eye-tracker increased, and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence point-of-gaze accuracy, precision, or trackability during either standing or walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings support the feasibility of using a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  9. The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye

    NASA Astrophysics Data System (ADS)

    Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis

    2008-09-01

    In routine eye examination, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity with isoluminant coloured stimuli compared to that for high-contrast black-white stimuli, for both true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using different charts in two ways: standard achromatic stimuli (black symbols on a white background) and isoluminant coloured stimuli (white symbols on a yellow background, grey symbols on a blue, green or red background). Thus the isoluminant tests had colour contrast only and no luminance contrast. Visual acuity evaluated with the standard method and with the colour tests was studied for subjects with good visual acuity, using the best vision correction where necessary. The same was performed for subjects with a defocused eye and with true amblyopia. Defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed a worsening of visual acuity compared with the visual acuity estimated with the standard high-contrast method (black symbols on a white background).

  10. Migraine increases centre-surround suppression for drifting visual stimuli.

    PubMed

    Battista, Josephine; Badcock, David R; McKendrick, Allison M

    2011-04-11

    The pathophysiology of migraine is incompletely understood, but evidence points to hyper-responsivity of cortical neurons being a key feature. The basis of hyper-responsiveness is not clear, with an excitability imbalance potentially arising from either reduced inhibition or increased excitation. In this study, we measure centre-surround contrast suppression in people with migraine as a perceptual analogue of the interplay between inhibition and excitation in cortical areas responsible for vision. We predicted that reduced inhibitory function in migraine would reduce perceptual surround suppression. Recent models of neuronal surround suppression incorporate excitatory feedback that drives surround inhibition. Consequently, an increase in excitation predicts an increase in perceptual surround suppression. Twenty-six people with migraine and twenty approximately age- and gender-matched non-headache controls participated. The perceived contrast of a central sinusoidal grating patch (4 c/deg stationary grating, or 2 c/deg drifting at 2 deg/sec, 40% contrast) was measured in the presence and absence of a 95% contrast annular grating (same orientation, spatial frequency, and drift rate). For the static grating, similar surround suppression strength was present in control and migraine groups, with the presence of the surround resulting in the central patch appearing to be 72% and 65% of its true contrast for control and migraine groups respectively (t(44) = 0.81, p = 0.42). For the drifting stimulus, the migraine group showed significantly increased surround suppression (t(44) = 2.86, p<0.01), with perceived contrast being on average 53% of actual contrast for the migraine group and 68% for non-headache controls. In between migraines, when asymptomatic, visual surround suppression for drifting stimuli is greater in individuals with migraine than in controls. The data provide evidence for a behaviourally measurable imbalance in inhibitory and excitatory visual processing.

  11. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
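
    The "simple linear model of stimulus detection as an additive function of both luminance and saturation contrast" can be fit with ordinary least squares as sketched below; the variable names and the assumption that `detected` holds per-stimulus detection rates (or 0/1 outcomes) are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def fit_additive_detection_model(luminance_contrast, saturation_contrast, detected):
        """Least-squares fit of detection ~ b0 + b1*luminance + b2*saturation."""
        lum = np.asarray(luminance_contrast, dtype=float)
        sat = np.asarray(saturation_contrast, dtype=float)
        y = np.asarray(detected, dtype=float)
        X = np.column_stack([np.ones_like(lum), lum, sat])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef  # [intercept, luminance-contrast weight, saturation-contrast weight]
    ```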

  12. A comparative analysis of global and local processing of hierarchical visual stimuli in young children (Homo sapiens) and monkeys (Cebus apella).

    PubMed

    De Lillo, Carlo; Spinozzi, Giovanna; Truppa, Valentina; Naylor, Donna M

    2005-05-01

    Results obtained with preschool children (Homo sapiens) were compared with results previously obtained from capuchin monkeys (Cebus apella) in matching-to-sample tasks featuring hierarchical visual stimuli. In Experiment 1, monkeys, in contrast with children, showed an advantage in matching the stimuli on the basis of their local features. These results were replicated in a 2nd experiment in which control trials enabled the authors to rule out that children used spurious cues to solve the matching task. In a 3rd experiment featuring conditions in which the density of the stimuli was manipulated, monkeys' accuracy in the processing of the global shape of the stimuli was negatively affected by the separation of the local elements, whereas children's performance was robust across testing conditions. Children's response latencies revealed a global precedence in the 2nd and 3rd experiments. These results show differences in the processing of hierarchical stimuli by humans and monkeys that emerge early during childhood. 2005 APA, all rights reserved

  13. Dorsal hippocampus is necessary for visual categorization in rats.

    PubMed

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features.

  14. Using an abstract geometry in virtual reality to explore choice behaviour: visual flicker preferences in honeybees.

    PubMed

    Van De Poll, Matthew N; Zajaczkowski, Esmi L; Taylor, Gavin J; Srinivasan, Mandyam V; van Swinderen, Bruno

    2015-11-01

    Closed-loop paradigms provide an effective approach for studying visual choice behaviour and attention in small animals. Different flying and walking paradigms have been developed to investigate behavioural and neuronal responses to competing stimuli in insects such as bees and flies. However, the variety of stimulus choices that can be presented over one experiment is often limited. Current choice paradigms are mostly constrained as single binary choice scenarios that are influenced by the linear structure of classical conditioning paradigms. Here, we present a novel behavioural choice paradigm that allows animals to explore a closed geometry of interconnected binary choices by repeatedly selecting among competing objects, thereby revealing stimulus preferences in an historical context. We used our novel paradigm to investigate visual flicker preferences in honeybees (Apis mellifera) and found significant preferences for 20-25 Hz flicker and avoidance of higher (50-100 Hz) and lower (2-4 Hz) flicker frequencies. Similar results were found when bees were presented with three simultaneous choices instead of two, and when they were given the chance to select previously rejected choices. Our results show that honeybees can discriminate among different flicker frequencies and that their visual preferences are persistent even under different experimental conditions. Interestingly, avoided stimuli were more attractive if they were novel, suggesting that novelty salience can override innate preferences. Our recursive virtual reality environment provides a new approach to studying visual discrimination and choice behaviour in animals. © 2015. Published by The Company of Biologists Ltd.

  15. Neural correlates of subliminally presented visual sexual stimuli.

    PubMed

    Wernicke, Martina; Hofter, Corinna; Jordan, Kirsten; Fromberger, Peter; Dechent, Peter; Müller, Jürgen L

    2017-03-01

    In the context of forensic psychiatry, it is crucial that diagnoses of deviant sexual interests are resistant to manipulation. In a first attempt to promote the development of such tools, the current fMRI study focusses on the examination of hemodynamic responses to preferred, in contrast to non-preferred, sexual stimuli with and without explicit sexual features in 24 healthy heterosexual subjects. Subliminal presentation of sexual stimuli could be a new approach to reducing vulnerability to manipulation. Meaningful images and scrambled images were applied as masks. Recognition performance was low, but interestingly, sexual preference and explicitness modulated stimulus visibility, suggesting interactions between networks of sexual arousal and consciousness. With scrambled masks, higher activations for sexually preferred images and for explicit images were found in areas associated with sexual arousal (Stoleru, Fonteille, Cornelis, Joyal, & Moulier, 2012). We conclude that masked sexual stimuli can evoke activations in areas associated with supraliminally induced sexual arousal. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  17. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  18. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  19. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097

  20. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors.

    PubMed

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play the role of a reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS, similar to that for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention to appropriate interpretations of results of human neuroimaging studies using VSS.

  1. Recall and recognition hypermnesia for Socratic stimuli.

    PubMed

    Kazén, Miguel; Solís-Macías, Víctor M

    2016-01-01

    In two experiments, we investigate hypermnesia, net memory improvements with repeated testing of the same material after a single study trial. In the first experiment, we found hypermnesia across three trials for the recall of word solutions to Socratic stimuli (dictionary-like definitions of concepts), replicating Erdelyi, Buschke, and Finkelstein, and, for the first time using these materials, for their recognition. In the second experiment, we had two "yes/no" recognition groups, a Socratic stimuli group presented with concrete and abstract verbal materials and a word-only control group. Using signal detection measures, we found hypermnesia for concrete Socratic stimuli, and stable performance for abstract stimuli, across three recognition tests. The control group showed memory decrements across tests. We interpret these findings with the alternative retrieval pathways (ARP) hypothesis, contrasting it with alternative theories of hypermnesia, such as depth of processing, generation, and retrieve-recognise. We conclude that recognition hypermnesia for concrete Socratic stimuli is a reliable phenomenon, which we found in two experiments involving both forced-choice and yes/no recognition procedures.
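
    For the "yes/no" groups above, the signal detection measure of recognition sensitivity is conventionally d' = z(hit rate) - z(false-alarm rate); hypermnesia then shows up as d' increasing over the three successive tests. The log-linear correction for extreme rates in this sketch is a common convention, assumed here rather than reported in the abstract.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' for one yes/no recognition test, with a log-linear
        correction (add 0.5 per cell) so rates of exactly 0 or 1 stay finite."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts for three successive tests of the same material:
    # d_by_test = [d_prime(*counts) for counts in (test1, test2, test3)]
    ```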

  2. New abstraction networks and a new visualization tool in support of auditing the SNOMED CT content.

    PubMed

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT.

  3. New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content

    PubMed Central

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293

  4. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  5. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on the human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  6. Visual field asymmetries in visual evoked responses

    PubMed Central

    Hagler, Donald J.

    2014-01-01

    Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151

  7. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    PubMed

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, typically visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. The aims of this study were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched cues in emotional perception on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  8. Steady-state VEP responses to uncomfortable stimuli.

    PubMed

    O'Hare, Louise

    2017-02-01

    Periodic stimuli, such as op-art, can evoke a range of aversive sensations included in the term visual discomfort. Illusory motion effects are elicited by fixational eye movements, but the cortex might also contribute to effects of discomfort. To investigate this possibility, steady-state visually evoked responses (SSVEPs) to contrast-matched op-art-based stimuli were measured at the same time as discomfort judgements. On average, discomfort decreased with increasing spatial frequency of the pattern. In contrast, the peak amplitude of the SSVEP response was around the midrange spatial frequencies. Like the discomfort judgements, SSVEP responses to the highest spatial frequencies were lowest in amplitude, but the relationship between discomfort and SSVEP broke down for the lower spatial frequency stimuli. This was not explicable by gross eye movements as measured using the facial electrodes. There was a weak relationship between the peak SSVEP responses and discomfort judgements for some stimuli, suggesting that discomfort can be explained in part by electrophysiological responses measured at the level of the cortex. However, there is a breakdown of this relationship in the case of lower spatial frequency stimuli, which remains unexplained. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…

  10. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  11. Visual stimuli for the P300 brain-computer interface: a comparison of white/gray and green/blue flicker matrices.

    PubMed

    Takano, Kouji; Komatsu, Tomoaki; Hata, Naoki; Nakajima, Yasoichi; Kansaku, Kenji

    2009-08-01

    The white/gray flicker matrix has been used as a visual stimulus for the so-called P300 brain-computer interface (BCI), but the white/gray flash stimuli might induce discomfort. In this study, we investigated the effectiveness of green/blue flicker matrices as visual stimuli. Ten able-bodied, non-trained subjects performed Alphabet Spelling (Japanese Alphabet: Hiragana) using an 8 x 10 matrix with three types of intensification/rest flicker combinations (L, luminance; C, chromatic; LC, luminance and chromatic); both online and offline performances were evaluated. The accuracy rate under the online LC condition was 80.6%. Offline analysis showed that the LC condition was associated with significantly higher accuracy than was the L or C condition (Tukey-Kramer, p < 0.05). No significant difference was observed between L and C conditions. The LC condition, which used the green/blue flicker matrix, was associated with better performance in the P300 BCI. The green/blue chromatic flicker matrix can be an efficient tool for practical BCI application.
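
    To make the stimulus design concrete, the toy sketch below generates the kind of randomized flash schedule a P300 speller uses. The classic row/column flashing scheme and the colour names are illustrative assumptions, not details taken from the study above.

```python
import random

# Hypothetical sketch of a P300 speller flash schedule for an 8 x 10 matrix.
# REST/INTENSIFY colours stand in for the "rest"/"intensify" flicker states;
# the row/column flashing scheme is the classic P300 speller design, assumed here.
ROWS, COLS = 8, 10
REST, INTENSIFY = "blue", "green"

def one_block():
    """One randomised block in which every row and every column flashes exactly once."""
    flashes = [("row", r) for r in range(ROWS)] + [("col", c) for c in range(COLS)]
    random.shuffle(flashes)
    return flashes

for kind, index in one_block():
    # A real BCI would redraw the matrix here, setting the flashed row or column
    # to INTENSIFY and everything else to REST; the flash containing the attended
    # character elicits the P300 component used for classification.
    pass
```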

  12. Shock-like haemodynamic responses induced in the primary visual cortex by moving visual stimuli

    PubMed Central

    Robinson, P. A.

    2016-01-01

    It is shown that recently discovered haemodynamic waves can form shock-like fronts when driven by stimuli that excite the cortex in a patch that moves faster than the haemodynamic wave velocity. If stimuli are chosen in order to induce shock-like behaviour, the resulting blood oxygen level-dependent (BOLD) response is enhanced, thereby improving the signal to noise ratio of measurements made with functional magnetic resonance imaging. A spatio-temporal haemodynamic model is extended to calculate the BOLD response and determine the main properties of waves induced by moving stimuli. From this, the optimal conditions for stimulating shock-like responses are determined, and ways of inducing these responses in experiments are demonstrated in a pilot study. PMID:27974572
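
    The core idea, that a drive moving faster than the wave it excites piles responses up into a steep front, can be illustrated with a minimal one-dimensional damped wave simulation. This is a generic sketch, not the spatio-temporal haemodynamic model used in the paper; all parameter values below are illustrative.

```python
import numpy as np

# Minimal 1-D sketch: a damped wave equation driven by a Gaussian source whose
# centre moves at speed v_stim. When v_stim exceeds the wave speed v_wave, the
# responses pile up into a steep, shock-like leading front.
nx, nt = 400, 1200
dx, dt = 0.5, 0.005          # grid spacing (mm) and time step (s), illustrative units
v_wave, v_stim = 2.0, 6.0    # wave speed vs. stimulus-patch speed (mm/s)
damping = 1.0                # damping rate (1/s)
x = np.arange(nx) * dx
u = np.zeros(nx)
u_prev = np.zeros(nx)
c2 = (v_wave * dt / dx) ** 2

for t in range(nt):
    centre = 20.0 + v_stim * t * dt                        # moving drive location
    drive = np.exp(-((x - centre) ** 2) / (2 * 2.0 ** 2))  # Gaussian drive patch
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)           # discrete Laplacian
    u_next = (2 * u - u_prev + c2 * lap
              + dt ** 2 * drive - damping * dt * (u - u_prev))
    u_prev, u = u, u_next

# The steepness of the leading edge of `u` grows when v_stim > v_wave,
# which is the regime the abstract above describes as shock-like.
```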

  13. Representation of visual symbols in the visual word processing network.

    PubMed

    Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S

    2015-03-01

    Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyrus, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but are not specific to stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    PubMed

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. How do expert soccer players encode visual information to make decisions in simulated game situations?

    PubMed

    Poplu, Gérald; Ripoll, Hubert; Mavromatis, Sébastien; Baratgin, Jean

    2008-09-01

    The aim of this study was to determine what visual information expert soccer players encode when they are asked to make a decision. We used a repetition-priming paradigm to test the hypothesis that experts encode a soccer pattern's structure independently of the players' physical characteristics (i.e., posture and morphology). The participants were given either realistic (digital photos) or abstract (three-dimensional schematic representations) soccer game patterns. The results showed that the experts benefited from priming effects regardless of how abstract the stimuli were. This suggests that an abstract representation of a realistic pattern (i.e., one that does not include visual information related to the players' physical characteristics) is sufficient to activate experts' specific knowledge during decision making. These results seem to show that expert soccer players encode and store abstract representations of visual patterns in memory.

  16. The role of early visual cortex in visual short-term memory and visual attention.

    PubMed

    Offen, Shani; Schluppeck, Denis; Heeger, David J

    2009-06-01

    We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.

  17. Neuronal activity in the lateral cerebellum of the cat related to visual stimuli at rest, visually guided step modification, and saccadic eye movements

    PubMed Central

    Marple-Horvat, D E; Criado, J M; Armstrong, D M

    1998-01-01

    The discharge patterns of 166 lateral cerebellar neurones were studied in cats at rest and during visually guided stepping on a horizontal circular ladder. A hundred and twelve cells were tested against one or both of two visual stimuli: a brief full-field flash of light delivered during eating or rest, and a rung which moved up as the cat approached. Forty-five cells (40%) gave a short latency response to one or both of these stimuli. These visually responsive neurones were found in hemispheral cortex (rather than paravermal) and the lateral cerebellar nucleus (rather than nucleus interpositus). Thirty-seven cells (of 103 tested, 36%) responded to flash. The cortical visual response (mean onset latency 38 ms) was usually an increase in Purkinje cell discharge rate, of around 50 impulses s⁻¹ and representing 1 or 2 additional spikes per trial (1.6 on average). The nuclear response to flash (mean onset latency 27 ms) was usually an increased discharge rate which was shorter lived and converted rapidly to a depression of discharge or return to control levels, so that there were on average only an additional 0.6 spikes per trial. A straightforward explanation of the difference between the cortical and nuclear response would be that the increased inhibitory Purkinje cell output cuts short the nuclear response. A higher proportion of cells responded to rung movement, sixteen of twenty-five tested (64%). Again most responded with increased discharge, which had longer latency than the flash response (first change in dentate output ca 60 ms after start of movement) and longer duration. Peak frequency changes were twice the size of those in response to flash, at 100 impulses s⁻¹ on average, and additional spikes per trial were correspondingly 3–4 times higher. Both cortical and nuclear responses were context dependent, being larger when the rung moved when the cat was closer than further away. A quarter of cells (20 of 84 tested, 24%) modulated their activity in advance

  18. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    PubMed

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  19. l-Theanine and caffeine improve target-specific attention to visual stimuli by decreasing mind wandering: a human functional magnetic resonance imaging study.

    PubMed

    Kahathuduwa, Chanaka N; Dhanasekara, Chathurika S; Chin, Shao-Hua; Davis, Tyler; Weerasinghe, Vajira S; Dassanayake, Tharaka L; Binks, Martin

    2018-01-01

    Oral intake of l-theanine and caffeine supplements is known to be associated with faster stimulus discrimination, possibly via improving attention to stimuli. We hypothesized that l-theanine and caffeine may be bringing about this beneficial effect by increasing attention-related neural resource allocation to target stimuli and decreasing deviation of neural resources to distractors. We used functional magnetic resonance imaging (fMRI) to test this hypothesis. Solutions of 200 mg of l-theanine, 160 mg of caffeine, their combination, or the vehicle (distilled water; placebo) were administered in a randomized 4-way crossover design to 9 healthy adult men. Sixty minutes after administration, a 20-minute fMRI scan was performed while the subjects performed a visual color stimulus discrimination task. l-Theanine and the l-theanine-caffeine combination resulted in faster responses to targets compared with placebo (Δ = 27.8 milliseconds, P = .018 and Δ = 26.7 milliseconds, P = .037, respectively). l-Theanine was associated with decreased fMRI responses to distractor stimuli in brain regions that regulate visual attention, suggesting that l-theanine may decrease neural resource allocation to the processing of distractors, thus allowing targets to be attended more efficiently. The l-theanine-caffeine combination was associated with decreased fMRI responses to target stimuli as compared with distractors in several brain regions that typically show increased activation during mind wandering. Factorial analysis suggested that l-theanine and caffeine seem to have a synergistic action in decreasing mind wandering. Therefore, our hypothesis that l-theanine and caffeine may decrease the deviation of attention to distractors (including mind wandering), thus enhancing attention to target stimuli, was confirmed. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Visual attention and emotional reactions to negative stimuli: The role of age and cognitive reappraisal.

    PubMed

    Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute

    2017-09-01

    Prominent life span theories of emotion propose that older adults attend less to negative emotional information and report less negative emotional reactions to the same information than younger adults do. Although parallel age differences in affective information processing and age differences in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity, using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older as compared with younger adults showed fixation patterns away from negative image content, while they reacted with greater negative emotions. The association between visual attention and emotional reactivity differed by age group and positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship only held for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    PubMed Central

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947

  2. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.

    PubMed

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A

    2016-08-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving "live partial-area taxonomies" is demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.
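
    As a rough illustration of the summarisation idea behind partial-area taxonomies, the sketch below groups a handful of made-up concepts by their sets of relationship types and then finds the hierarchical roots within each group. The grouping rule and the toy concepts are assumptions made for illustration only; the OAF itself operates on full ontologies (e.g., OWL files or SNOMED CT releases), including through the Protégé plugin mentioned above.

```python
from collections import defaultdict

# Toy sketch of the grouping idea behind area/partial-area taxonomies.
# Assumption: an "area" collects concepts sharing the same set of relationship
# types; concepts with no parent inside their area become partial-area roots.
concepts = {
    "Pneumonia":       {"rels": {"finding-site", "causative-agent"}, "parents": {"Lung disease"}},
    "Viral pneumonia": {"rels": {"finding-site", "causative-agent"}, "parents": {"Pneumonia"}},
    "Lung disease":    {"rels": {"finding-site"},                    "parents": {"Disease"}},
    "Disease":         {"rels": set(),                               "parents": set()},
}

areas = defaultdict(list)
for name, c in concepts.items():
    areas[frozenset(c["rels"])].append(name)

for rels, members in areas.items():
    roots = [m for m in members if not (concepts[m]["parents"] & set(members))]
    label = ", ".join(sorted(rels)) or "<no relationships>"
    print(f"area [{label}] -> partial-area roots: {roots}")
```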

  3. The role of automatic orienting of attention towards ipsilesional stimuli in non-visual (tactile and auditory) neglect: a critical review.

    PubMed

    Gainotti, Guido

    2010-02-01

    The aim of the present survey was to review scientific articles dealing with the non-visual (auditory and tactile) forms of neglect to determine: (a) whether behavioural patterns similar to those observed in the visual modality can also be observed in the non-visual modalities; (b) whether a different severity of neglect can be found in the visual and in the auditory and tactile modalities; (c) the reasons for the possible differences between the visual and non-visual modalities. Data pointing to a contralesional orienting of attention in the auditory and the tactile modalities in visual neglect patients were separately reviewed. Results showed: (a) that in patients with right brain damage manifestations of neglect for the contralesional side of space can be found not only in the visual but also in the auditory and tactile modalities; (b) that the severity of neglect is greater in the visual than in the non-visual modalities. This asymmetry in the severity of neglect across modalities seems due to the greater role that the automatic capture of attention by irrelevant ipsilesional stimuli seems to play in the visual modality. Copyright 2009 Elsevier Srl. All rights reserved.

  4. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    PubMed

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear

  5. Abstract representations of associated emotions in the human brain.

    PubMed

    Kim, Junsuk; Schultz, Johannes; Rohe, Tim; Wallraven, Christian; Lee, Seong-Whan; Bülthoff, Heinrich H

    2015-04-08

    Emotions can be aroused by various kinds of stimulus modalities. Recent neuroimaging studies indicate that several brain regions represent emotions at an abstract level, i.e., independently from the sensory cues from which they are perceived (e.g., face, body, or voice stimuli). If emotions are indeed represented at such an abstract level, then these abstract representations should also be activated by the memory of an emotional event. We tested this hypothesis by asking human participants to learn associations between emotional stimuli (videos of faces or bodies) and non-emotional stimuli (fractals). After successful learning, fMRI signals were recorded during the presentations of emotional stimuli and emotion-associated fractals. We tested whether emotions could be decoded from fMRI signals evoked by the fractal stimuli using a classifier trained on the responses to the emotional stimuli (and vice versa). This was implemented as a whole-brain searchlight, multivoxel activation pattern analysis, which revealed successful emotion decoding in four brain regions: posterior cingulate cortex (PCC), precuneus, MPFC, and angular gyrus. The same analysis run only on responses to emotional stimuli revealed clusters in PCC, precuneus, and MPFC. Multidimensional scaling analysis of the activation patterns revealed clear clustering of responses by emotion across stimulus types. Our results suggest that PCC, precuneus, and MPFC contain representations of emotions that can be evoked by stimuli that carry emotional information themselves or by stimuli that evoke memories of emotional stimuli, while angular gyrus is more likely to take part in emotional memory retrieval. Copyright © 2015 the authors 0270-6474/15/355655-09$15.00/0.
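
    The cross-decoding logic described above (train a classifier on responses to emotional stimuli, then test it on responses to the emotion-associated fractals) can be sketched in a few lines of scikit-learn. The data below are random placeholders with an artificial emotion signal injected; a real analysis would apply this to the fMRI patterns within each searchlight sphere.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder multivoxel patterns (trials x voxels); a weak "emotion" signal is
# injected so the toy cross-decoding succeeds above chance.
n_trials, n_voxels = 80, 200
emotion = rng.integers(0, 2, n_trials)             # e.g. 0 = positive, 1 = negative
video_patterns   = rng.normal(size=(n_trials, n_voxels)) + 0.5 * emotion[:, None]
fractal_patterns = rng.normal(size=(n_trials, n_voxels)) + 0.5 * emotion[:, None]

# Train on responses to the emotional videos, test on the associated fractals.
clf = LinearSVC().fit(video_patterns, emotion)
print("cross-decoding accuracy:", clf.score(fractal_patterns, emotion))
```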

  6. Analysis of discriminative control by social behavioral stimuli

    PubMed Central

    Hake, Don F.; Donaldson, Tom; Hyten, Cloyd

    1983-01-01

    Visual discriminative control of the behavior of one rat by the behavior of another was studied in a two-compartment chamber. Each rat's compartment had a food cup and two response keys arranged vertically next to the clear partition that separated the two rats. Illumination of the leader's key lights signaled a “search” period when a response by the leader on the unsignaled and randomly selected correct key for that trial illuminated the follower's keys. Then, a response by the follower on the corresponding key was reinforced, or a response on the incorrect key terminated the trial without reinforcement. Accuracy of following the leader increased to 85% within 15 sessions. Blocking the view of the leader reduced accuracy but not to chance levels. Apparent control by visual behavioral stimuli was also affected by auditory stimuli and a correction procedure. When white noise eliminated auditory cues, social learning was not acquired as fast nor as completely. A reductionistic position holds that behavioral stimuli are the same as nonsocial stimuli; however, that does not mean that they do not require any separate treatment. Behavioral stimuli are usually more variable than nonsocial stimuli, and further study is required to disentangle behavioral and nonsocial contributions to the stimulus control of social interactions. PMID:16812313

  7. Sex differences in interactions between nucleus accumbens and visual cortex by explicit visual erotic stimuli: an fMRI study.

    PubMed

    Lee, S W; Jeong, B S; Choi, J; Kim, J-W

    2015-01-01

    Men tend to have greater positive responses than women to explicit visual erotic stimuli (EVES). However, it remains unclear which brain network makes men more sensitive to EVES and which factors contribute to the brain network activity. In this study, we aimed to assess the effect of sex differences on brain connectivity patterns elicited by EVES. We also investigated the association of testosterone with the brain connections that showed effects of sex difference. During functional magnetic resonance imaging scans, 14 males and 14 females were asked to view alternating blocks of pictures that were either erotic or non-erotic. Psychophysiological interaction analysis was performed to investigate the functional connectivity of the nucleus accumbens (NA) as it related to EVES. Men showed a significantly greater EVES-specific functional connection between the right NA and the right lateral occipital cortex (LOC). In addition, activity in the network between the right NA and the right LOC was positively correlated with the plasma testosterone level in men. Our results suggest that the reason men are sensitive to EVES is the increased interaction in the visual reward networks, which is modulated by their plasma testosterone level.

  8. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  9. Starting research in interaction design with visuals for low-functioning children in the autistic spectrum: a protocol.

    PubMed

    Parés, Narcís; Carreras, Anna; Durany, Jaume; Ferrer, Jaume; Freixa, Pere; Gómez, David; Kruglanski, Orit; Parés, Roc; Ribas, J Ignasi; Soler, Miquel; Sanjurjo, Alex

    2006-04-01

    On starting to think about interaction design for low-functioning persons in the autistic spectrum (PAS), especially children, one finds a number of questions that are difficult to answer: Can we typify the PAS user? Can we engage the user in interactive communication without generating frustrating or obsessive situations? What sort of visual stimuli can we provide? Will they prefer representational or abstract visual stimuli? Will they understand three-dimensional (3D) graphic representation? What sort of interfaces will they accept? Can we set ambitious goals such as education or therapy? Unfortunately, most of these questions have no answer yet. Hence, we decided to set an apparently simple goal: to design a "fun application," with no intention to reach the level of education or therapy. The goal was to be attained by giving the users a sense of agency, by first providing a sense of control in the interaction dialogue. Our approach to visual stimuli design has been based on the use of geometric, abstract, two-dimensional (2D), real-time computer graphics in a full-body, non-invasive, interactive space. The results obtained within the European-funded project MultiSensory Environment Design for an Interface between Autistic and Typical Expressiveness (MEDIATE) have been extremely encouraging.

  10. Serotonin 5-HTTLPR Genotype Modulates Reactive Visual Scanning of Social and Non-social Affective Stimuli in Young Children

    PubMed Central

    Christou, Antonios I.; Wallis, Yvonne; Bair, Hayley; Zeegers, Maurice; McCleery, Joseph P.

    2017-01-01

    Previous studies have documented the 5-HTTLPR polymorphisms as genetic variants that are involved in serotonin availability and also associated with emotion regulation and facial emotion processing. In particular, neuroimaging and behavioral studies of healthy populations have produced evidence to suggest that carriers of the Short allele exhibit heightened neurophysiological and behavioral reactivity when processing aversive stimuli, particularly in brain regions involved in fear. However, an additional distinction has emerged in the field, which highlights particular types of fearful information, i.e., aversive information which involves a social component versus non-social aversive stimuli. Although processing of each of these stimulus types (social and non-social) is believed to involve a subcortical neural system which includes the amygdala, evidence also suggests that the amygdala itself may be particularly responsive to socially significant environmental information, potentially due to the critical relevance of social information for humans. Examining individual differences in neurotransmitter systems which operate within this subcortical network, and in particular the serotonin system, may be critically informative for furthering our understanding of the neurobiological mechanisms underlying responses to emotional and affective stimuli. In the present study we examine visual scanning patterns in response to both aversive and positive images of a social or non-social nature in relation to 5-HTTLPR genotypes, in 49 children aged 4–7 years. Results indicate that children with at least one Short 5-HTTLPR allele spent less time fixating the threat-related non-social stimuli, compared with participants with two copies of the Long allele. Interestingly, a separate set of analyses suggests that carriers of two copies of the short 5-HTTLPR allele also spent less time fixating both the negative and positive non-social stimuli. Together, these findings support the

  11. Generating Stimuli for Neuroscience Using PsychoPy.

    PubMed

    Peirce, Jonathan W

    2008-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
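
    For readers unfamiliar with the library, a minimal PsychoPy script looks roughly like the sketch below (the stimulus parameters are arbitrary): it opens a window, drifts a sinusoidal grating for one second, and closes.

```python
from psychopy import visual, core

# Minimal PsychoPy sketch: show a drifting sinusoidal grating for one second.
win = visual.Window(size=(800, 600), color="gray")
grating = visual.GratingStim(win, tex="sin", mask="gauss", sf=4, size=0.5)

clock = core.Clock()
while clock.getTime() < 1.0:
    grating.phase += 0.01   # advance the grating phase a little each frame
    grating.draw()
    win.flip()              # flip() synchronises drawing with the monitor refresh

win.close()
core.quit()
```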

  12. Patterned light flash evoked short latency activity in the visual system of visually normal and in amblyopic subjects.

    PubMed

    Sjöström, A; Abrahamsson, M

    1994-04-01

    In a previous experimental study on the anaesthetized cat it was shown that a short latency (35-40 ms) cortical potential changed polarity due to the presence or absence of a pattern in the flash stimulus. The results suggested one pathway of neuronal activation in the cortex to a pattern that was within the level of resolution and another to patterns that were not. It was implied that a similar difference in impulse transmission to pattern and non-pattern stimuli may be recorded in humans. The present paper describes recordings of the short-latency visual evoked response to varying light flash checkerboard pattern stimuli of high intensity in visually normal and amblyopic children and adults. When stimulating the normal eye, a visual evoked response potential with a peak latency between 35 and 40 ms showed a polarity change to patterned compared to non-patterned stimulation. The visual evoked response resolution limit could be correlated to a visual acuity of 0.5 and below. In amblyopic eyes the shift in polarity was recorded at the acuity limit level. The latency of the pattern-dependent potential was increased in patients with amblyopia compared to normal, but was not directly related to the degree of amblyopia. It is concluded that the short-latency visual evoked response, which mainly represents retino-geniculo-cortical activation, may be used to estimate visual resolution below the 0.5 acuity level. (ABSTRACT TRUNCATED AT 250 WORDS)

  13. Effects of ambient stimuli on measures of behavioral state and microswitch use in adults with profound multiple impairments.

    PubMed

    Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B

    2004-01-01

    The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants with the varying environmental setting conditions. No consistent effects were observed in behavioral state related to environmental condition. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relationship to switch use when working with individuals with profound multiple impairments.

  14. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    PubMed Central

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
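
    The comparison between neural pattern similarity and behaviour-based models described above is a form of representational similarity analysis; a minimal sketch with placeholder data is shown below. A real analysis would use the object-evoked fMRI patterns and the behavioural visual and conceptual similarity measures rather than random arrays.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Placeholder data: response patterns for 20 objects over 150 voxels, plus two
# random "model" dissimilarity vectors standing in for the behaviour-based
# visual and conceptual models.
n_objects, n_voxels = 20, 150
neural_patterns = rng.normal(size=(n_objects, n_voxels))
neural_rdm = pdist(neural_patterns, metric="correlation")   # condensed dissimilarity matrix
visual_model_rdm = rng.random(neural_rdm.shape[0])
conceptual_model_rdm = rng.random(neural_rdm.shape[0])

# Rank-correlate the neural dissimilarities with each model.
print("visual model fit:    ", spearmanr(neural_rdm, visual_model_rdm)[0])
print("conceptual model fit:", spearmanr(neural_rdm, conceptual_model_rdm)[0])
```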

  15. Age-related differences in audiovisual interactions of semantically different stimuli.

    PubMed

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children, the incoherent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction on semantic processing undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Freezing Behavior as a Response to Sexual Visual Stimuli as Demonstrated by Posturography

    PubMed Central

    Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre

    2015-01-01

    Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a good canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared with the other conditions, as shown by significantly decreased standard deviations for (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure’s displacement surface. These results underline the complexity of the motor correlates of sexual motivation, which is considered a canonical functional context for studying the motor correlates of motivated social interactions. PMID:25992571

  17. Freezing behavior as a response to sexual visual stimuli as demonstrated by posturography.

    PubMed

    Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre

    2015-01-01

    Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a good canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared with the other conditions, as shown by significantly decreased standard deviations for (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underline the complexity of the motor correlates of sexual motivation, which is considered a canonical functional context for studying the motor correlates of motivated social interactions.

  18. Probing the influence of unconscious fear-conditioned visual stimuli on eye movements.

    PubMed

    Madipakkam, Apoorva Rajiv; Rothkirch, Marcus; Wilbertz, Gregor; Sterzer, Philipp

    2016-11-01

    Efficient threat detection from the environment is critical for survival. Accordingly, fear-conditioned stimuli receive prioritized processing and capture overt and covert attention. However, it is unknown whether eye movements are influenced by unconscious fear-conditioned stimuli. We performed a classical fear-conditioning procedure and subsequently recorded participants' eye movements while they were exposed to fear-conditioned stimuli that were rendered invisible using interocular suppression. Chance-level performance in a forced-choice-task demonstrated unawareness of the stimuli. Differential skin conductance responses and a change in participants' fearfulness ratings of the stimuli indicated the effectiveness of conditioning. However, eye movements were not biased towards the fear-conditioned stimulus. Preliminary evidence suggests a relation between the strength of conditioning and the saccadic bias to the fear-conditioned stimulus. Our findings provide no strong evidence for a saccadic bias towards unconscious fear-conditioned stimuli but tentative evidence suggests that such an effect may depend on the strength of the conditioned response. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Art expertise modulates the emotional response to modern art, especially abstract: an ERP investigation

    PubMed Central

    Else, Jane E.; Ellis, Jason; Orme, Elizabeth

    2015-01-01

    Art is one of life’s great joys, whether it is beautiful, ugly, sublime or shocking. Aesthetic responses to visual art involve sensory, cognitive and visceral processes. Neuroimaging studies have yielded a wealth of information regarding aesthetic appreciation and beauty using visual art as stimuli, but few have considered the effect of expertise on visual and visceral responses. To study the time course of visual, cognitive and emotional processes in response to visual art, we investigated the event-related potentials (ERPs) elicited whilst viewing and rating the visceral affect of three categories of visual art. Two groups, artists and non-artists, viewed representational, abstract and indeterminate 20th century art. Early components, particularly the N1, related to attention and effort, and the P2, linked to higher order visual processing, were enhanced for artists when compared to non-artists. This effect was present for all types of art, but further enhanced for abstract art (AA), which was rated as having the lowest visceral affect by the non-artists. The later, slow wave processes (500–1000 ms), associated with arousal and sustained attention, also showed clear differences between the two groups in response to both type of art and visceral affect. AA increased arousal and sustained attention in artists, whilst it decreased in non-artists. These results suggest that aesthetic response to visual art is affected by both expertise and semantic content. PMID:27242497

  20. Bayesian-based integration of multisensory naturalistic perithreshold stimuli.

    PubMed

    Regenbogen, Christina; Johansson, Emilia; Andersson, Patrik; Olsson, Mats J; Lundström, Johan N

    2016-07-29

    Most studies exploring multisensory integration have used clearly perceivable stimuli. According to the principle of inverse effectiveness, the added neural and behavioral benefit of integrating clear stimuli is reduced in comparison to stimuli with degraded and less salient unisensory information. Traditionally, speed and accuracy measures have been analyzed separately with few studies merging these to gain an understanding of speed-accuracy trade-offs in multisensory integration. In two separate experiments, we assessed multisensory integration of naturalistic audio-visual objects consisting of individually-tailored perithreshold dynamic visual and auditory stimuli, presented within a multiple-choice task, using a Bayesian Hierarchical Drift Diffusion Model that combines response time and accuracy. For both experiments, unisensory stimuli were degraded to reach a 75% identification accuracy level for all individuals and stimuli to promote multisensory binding. In Experiment 1, we subsequently presented uni- and their respective bimodal stimuli followed by a 5-alternative-forced-choice task. In Experiment 2, we controlled for low-level integration and attentional differences. Both experiments demonstrated significant superadditive multisensory integration of bimodal perithreshold dynamic information. We present evidence that the use of degraded sensory stimuli may provide a link between previous findings of inverse effectiveness on a single neuron level and overt behavior. We further suggest that a combined measure of accuracy and reaction time may be a more valid and holistic approach of studying multisensory integration and propose the application of drift diffusion models for studying behavioral correlates as well as brain-behavior relationships of multisensory integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
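
    The drift-diffusion idea underlying the model class used above can be illustrated with a plain random-walk simulation (this is a sketch, not the hierarchical Bayesian fit itself, and all parameters are illustrative): noisy evidence accumulates toward a bound, jointly producing a choice and a response time, so that a degraded stimulus (lower drift rate) slows responses and lowers accuracy together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-bound drift-diffusion simulation: evidence drifts noisily until it
# reaches +bound (treated here as the correct response) or -bound.
def simulate_trial(drift=0.8, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= bound, t          # (correct?, response time in seconds)

for drift in (0.4, 0.8):          # e.g. a degraded unimodal vs. a bimodal stimulus
    trials = [simulate_trial(drift=drift) for _ in range(2000)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"drift={drift}: accuracy={acc:.2f}, mean RT={rt:.2f} s")
```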

  1. Behavioral and neural indices of affective coloring for neutral social stimuli

    PubMed Central

    Schaefer, Stacey M; Lapate, Regina C; Schoen, Andrew J; Gresham, Lauren K; Mumford, Jeanette A; Davidson, Richard J

    2018-01-01

    Emotional processing often continues beyond the presentation of emotionally evocative stimuli, which can result in affective biasing or coloring of subsequently encountered events. Here, we describe neural correlates of affective coloring and examine how individual differences in affective style impact the magnitude of affective coloring. We conducted functional magnetic resonance imaging in 117 adults who passively viewed negative, neutral and positive pictures presented 2 s prior to neutral faces. Brain responses to neutral faces were modulated by the valence of preceding pictures, with greater activation for faces following negative (vs positive) pictures in the amygdala, dorsomedial and lateral prefrontal cortex, ventral visual cortices, posterior superior temporal sulcus, and angular gyrus. Three days after the magnetic resonance imaging scan, participants rated their memory and liking of previously encountered neutral faces. Individuals higher in trait positive affect and emotional reappraisal rated faces as more likable when preceded by emotionally arousing (negative or positive) pictures. In addition, greater amygdala responses to neutral faces preceded by positively valenced pictures were associated with greater memory for these faces 3 days later. Collectively, these results reveal individual differences in how emotions spill over onto the processing of unrelated social stimuli, resulting in persistent and affectively biased evaluations of such stimuli. PMID:29447377

  2. Decreased visual detection during subliminal stimulation.

    PubMed

    Bareither, Isabelle; Villringer, Arno; Busch, Niko A

    2014-10-17

    What is the perceptual fate of invisible stimuli? Are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: Subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: Target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli, either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.

  3. Determination of myopes' visual acuity using stimuli with different contrast

    NASA Astrophysics Data System (ADS)

    Ikaunieks, G.; Caure, E.; Kassaliete, E.; Meskovska, Z.

    2012-10-01

    The influence of different contrast stimuli on the myopes’ visual acuity (VA) was studied using positive (35.7), negative (-0.97) and low contrast (-0.11) Landolt optotypes. Test subjects were 13 myopes with corrected eyesight and 8 emmetropes, all of them being 20-22 years old. For VA determination the FrACT computer program was employed. In the tests it was found that for emmetropes the positive and negative contrast VA values do not differ significantly, while for myopes the respective values are better with positive than with negative contrast stimuli. These differences were the same in the measurements taken with spectacles or contact lenses. Our results also show that the retinal straylight created by clean spectacles or soft contact lenses is similar in both cases. Latvian abstract: Studies by some authors show that in myopia visual acuity is better with positive Weber-contrast stimuli (a white stimulus on a black background) than with negative-contrast stimuli (a black stimulus on a white background). This phenomenon has been linked to neural changes in the ON and OFF pathways of myopic eyes. Other studies show that light scattered within the eye also leads to better visual acuity with positive-contrast than with negative-contrast stimuli. In myopia, spectacle lenses or contact lenses produce additional light scatter. In our study we wanted to determine to what extent the better visual acuity with positive-contrast stimuli in myopic eyes can be attributed to the light scatter introduced by the optical correction. Twenty-one participants took part in the study: 8 emmetropes and 13 myopes with spherical refraction from -1.25 to -6.25 D. The participants were 20 to 22 years old. Using the FrACT computer program, monocular visual acuity (VA) was determined with Landolt rings at positive, negative and low contrast under photopic conditions. The Weber contrasts of the stimuli were 35.7, -0.97 and -0.11, respectively. For the myopes, measurements were taken both with spectacles and with contact lenses.
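
    For reference, the contrast values quoted above follow from the usual Weber-contrast definition C = (L_target - L_background) / L_background; the sketch below reproduces them with illustrative luminances (the actual display luminances are not given in the abstract).

```python
# Weber contrast of an optotype against its background.
def weber_contrast(l_target, l_background):
    return (l_target - l_background) / l_background

# Illustrative luminances (cd/m^2) chosen to reproduce the quoted contrasts:
print(round(weber_contrast(110.0, 3.0), 1))    # bright ring on a dark field -> 35.7
print(round(weber_contrast(3.0, 110.0), 2))    # dark ring on a bright field -> -0.97
print(round(weber_contrast(98.0, 110.0), 2))   # low-contrast ring           -> -0.11
```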

  4. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    PubMed

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

    This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of the extended time available for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.
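
    The temporal-distinctiveness principle attributed to SIMPLE above can be illustrated with a short sketch. This is a toy implementation of the core assumption only (similarity falls off exponentially with the distance between log-transformed retention intervals); the function name, the parameter c and the example times are illustrative, not the authors' fitted model:

        import numpy as np

        def simple_style_discriminability(retention_times, c=10.0):
            """Toy SIMPLE-style distinctiveness: items close together on a
            log-time scale are more confusable, hence less discriminable."""
            log_t = np.log(retention_times)
            sim = np.exp(-c * np.abs(log_t[:, None] - log_t[None, :]))  # pairwise similarity
            return 1.0 / sim.sum(axis=1)  # self-similarity (= 1) relative to summed similarity

        # Five list items recalled 10, 8, 6, 4 and 2 s after presentation: the most recent
        # items are more widely spaced on a log-time scale and so come out as more discriminable.
        print(simple_style_discriminability(np.array([10.0, 8.0, 6.0, 4.0, 2.0])))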

  5. Active versus passive maintenance of visual nonverbal memory.

    PubMed

    McKeown, Denis; Holt, Jessica; Delvenne, Jean-Francois; Smith, Amy; Griffiths, Benjamin

    2014-08-01

    Forgetting over the short term has challenged researchers for more than a century, largely because of the difficulty of controlling what goes on within the memory retention interval. But the "recent-negative-probe" procedure offers a valuable paradigm, by examining the influences of (presumably) unattended memoranda from prior trials. Here we used a recent-probe task to investigate forgetting for visual nonverbal short-term memory. The target stimuli (two visually presented abstract shapes) on a trial were followed after a retention interval by a probe, and participants indicated whether the probe matched one of the target items. Proactive interference, and hence memory for old trial probes, was observed, whereby participants were slowed in rejecting a nonmatching probe on the current trial that nevertheless matched a target item on the previous trial (a recent-negative probe). The attraction of the paradigm is that, by uncovering proactive influences of past-trial probe stimuli, it can be argued that active maintenance in memory of those probes is unlikely. In two experiments, we recorded such proactive interference of prior-trial items over a range of interstimulus (ISI) and intertrial (ITI) intervals (between 1 and 6 s, respectively). Consistent with a proposed two-process memory conception (the active-passive memory model, or APM), actively maintained memories on current trials decayed, but passively "maintained," or unattended, visual memories of stimuli on past trials did not.

  6. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    PubMed

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

    Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. Enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.

  7. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  8. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  9. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study.

    PubMed

    Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana

    2015-05-01

    To compare visual fixation at social stimuli in patients with Rett syndrome (RS) and autism spectrum disorders (ASD). Visual fixation at social stimuli was analyzed in 14 RS female patients (age range 4-30 years), 11 ASD male patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli) presented for 8 seconds each on the screen of a computer attached to eye-tracker equipment. Percentage of visual fixation at social stimuli was significantly higher in the RS group compared to ASD and even to TD groups. Visual fixation at social stimuli seems to be one more endophenotype making RS very different from ASD.

  10. Challenging Cognitive Control by Mirrored Stimuli in Working Memory Matching

    PubMed Central

    Wirth, Maria; Gaschler, Robert

    2017-01-01

    Cognitive conflict has often been investigated by placing automatic processing originating from learned associations in competition with instructed task demands. Here we explore whether mirror generalization as a congenital mechanism can be employed to create cognitive conflict. Past research suggests that the visual system automatically generates an invariant representation of visual objects and their mirrored counterparts (i.e., mirror generalization), and especially so for lateral reversals (e.g., a cup seen from the left side vs. right side). Prior work suggests that mirror generalization can be reduced or even overcome by learning (i.e., for those visual objects for which it is not appropriate, such as letters d and b). We, therefore, minimized prior practice on resolving conflicts involving mirror generalization by using kanji stimuli as non-verbal and unfamiliar material. In a 1-back task, participants had to check a stream of kanji stimuli for identical repetitions and avoid miscategorizing mirror-reversed stimuli as exact repetitions. Consistent with previous work, lateral reversals led to profound slowing of reaction times and lower accuracy in Experiment 1. Yet, different from previous reports suggesting that lateral reversals lead to stronger conflict, similar slowing for vertical and horizontal mirror transformations was observed in Experiment 2. Taken together, the results suggest that transformations of visual stimuli can be employed to challenge cognitive control in the 1-back task. PMID:28503160

  11. The company they keep: Background similarity influences transfer of aftereffects from second- to first-order stimuli

    PubMed Central

    Qian, Ning; Dayan, Peter

    2013-01-01

    A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217

  12. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices

    PubMed Central

    2012-01-01

    Background: A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in the related sensory cortices. Methods: Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, Cz in accordance with the international 10–20 system, and 2 cm posterior from C4, during visual, auditory and somatosensory stimulations. The oxidative-hemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Results: Latencies of the late component of all sensory evoked potentials significantly shortened, and the amplitude of auditory evoked potentials increased when the neck was in a flexed position. Oxidative-hemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position. The left visual cortex is responsible for receiving the visual information. In addition, oxidative-hemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Conclusions: Visual, auditory and somatosensory pathways were activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities in sensory projection to the cerebral cortex and inter-hemispheric connections. PMID:23199306

  13. Imagining the truth and the moon: an electrophysiological study of abstract and concrete word processing.

    PubMed

    Gullick, Margaret M; Mitra, Priya; Coch, Donna

    2013-05-01

    Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.

  14. To each its own? Gender differences in affective, autonomic, and behavioral responses to same-sex and opposite-sex visual sexual stimuli.

    PubMed

    Sarlo, Michela; Buodo, Giulia

    2017-03-15

    A large body of research on gender differences in response to erotic stimuli has focused on genital and/or subjective sexual arousal. On the other hand, studies assessing gender differences in emotional psychophysiological responding to sexual stimuli have only employed erotic pictures of male-female couples or female/male nudes. The present study aimed at investigating differences between gynephilic men and androphilic women in emotional responding to visual sexual stimuli depicting female-male, female-female and male-male couples. Affective responses were explored in multiple response systems, including autonomic indices of emotional activation, i.e., heart rate and skin conductance, along with standardized measures of valence and arousal. Blood pressure was measured as an index of autonomic activation associated with sexual arousal, and free viewing times as an index of interest/avoidance. Overall, men showed gender-specific activation characterized by clearly appetitive reactions to the target of their sexual attraction (i.e., women), with physiological arousal discriminating female-female stimuli as the most effective sexual cues. In contrast, women's emotional activation to sexual stimuli was clearly non-specific in most of the considered variables, with the notable exception of the self-report measures. Overall, affective responses replicate patterns of gender-specific and gender-nonspecific sexual responses in gynephilic men and androphilic women. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    PubMed

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
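
    The linear read-out of category information described above can be sketched as follows; the firing rates and labels here are synthetic placeholders, not the MVL or entopallium recordings, and the classifier settings are illustrative only:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_neurons = 200, 60

        # Synthetic population responses: one row per stimulus presentation, one column per
        # neuron; labels mark animate (1) vs. inanimate (0) stimuli.
        labels = rng.integers(0, 2, n_trials)
        rates = rng.normal(5.0, 1.0, (n_trials, n_neurons)) \
                + 0.5 * labels[:, None] * rng.normal(1.0, 0.2, n_neurons)

        # If the population carries category information, a linear classifier can read it out.
        clf = LogisticRegression(max_iter=1000)
        print("cross-validated decoding accuracy:", cross_val_score(clf, rates, labels, cv=5).mean())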

  16. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  17. [Effects of visual optical stimuli for accommodation-convergence system on asthenopia].

    PubMed

    Iwasaki, Tsuneto; Tawara, Akihiko; Miyake, Nobuyuki

    2006-01-01

    We investigated the effect on eyestrain of optical stimuli that we designed for the accommodation and convergence systems. Eight female students were given the optical stimuli for 1.5 min immediately after 20 min of a sustained task on a 3-D display. Before and after the trial, their ocular functions were measured and their symptoms were assessed. The optical stimuli were applied by moving targets of scenery images far and near around the far-point position of both eyes on a horizontal plane, which induced divergence in the direction of the eye position of rest. In a control group, subjects rested with closed eyes for 1.5 min instead of receiving the optical stimuli. After the closed-eye rest, the control group showed significant changes in the accommodative contraction time (from far to near; from 1.26 s to 1.62 s), the accommodative relaxation time (from near to far; from 1.49 s to 1.63 s), the lag of accommodation at a near target (from 0.5 D to 0.65 D), and in the symptoms. In the stimulus group, however, the changes in those functions were smaller than in the control group. From these results, we suggest that our designed optical stimuli for the accommodation and convergence systems are effective against asthenopia resulting from accommodative dysfunction.

  18. Different Visual Preference Patterns in Response to Simple and Complex Dynamic Social Stimuli in Preschool-Aged Children with Autism Spectrum Disorders

    PubMed Central

    Shi, Lijuan; Zhou, Yuanyue; Ou, Jianjun; Gong, Jingbo; Wang, Suhong; Cui, Xilong; Lyu, Hailong; Zhao, Jingping; Luo, Xuerong

    2015-01-01

    Eye-tracking studies in young children with autism spectrum disorder (ASD) have shown a visual attention preference for geometric patterns when viewing paired dynamic social images (DSIs) and dynamic geometric images (DGIs). In the present study, eye-tracking of two different paired presentations of DSIs and DGIs was monitored in a group of 13 children aged 4 to 6 years with ASD and 20 chronologically age-matched typically developing children (TDC). The results indicated that compared with the control group, children with ASD attended significantly less to DSIs showing two or more children playing than to similar DSIs showing a single child. Visual attention preference in 4- to 6-year-old children with ASDs, therefore, appears to be modulated by the type of visual stimuli. PMID:25781170

  19. Inverse Target- and Cue-Priming Effects of Masked Stimuli

    ERIC Educational Resources Information Center

    Mattler, Uwe

    2007-01-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses…

  20. Binocular Combination of Second-Order Stimuli

    PubMed Central

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180
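
    The dependence of perceived phase on the interocular signal ratio can be illustrated with a simple linear-summation sketch (an assumption for illustration; the study's actual contrast-gain-control model and measured weights are not reproduced): each eye views a grating phase-shifted by +theta/2 or -theta/2, and the phase of the weighted sum shows how strongly each eye contributes.

        import numpy as np

        def perceived_phase(theta_deg, ratio):
            """Phase (deg) of w_L*cos(x - theta/2) + w_R*cos(x + theta/2), with ratio = w_R/w_L.
            ratio = 1 (balanced eyes) gives a perceived phase of 0."""
            half = np.deg2rad(theta_deg) / 2.0
            return np.rad2deg(np.arctan(((1.0 - ratio) / (1.0 + ratio)) * np.tan(half)))

        for r in [0.0, 0.5, 1.0, 2.0]:
            print(r, round(perceived_phase(45.0, r), 1))
        # ratio 0 -> +22.5 (left eye dominates), ratio 1 -> 0 (balanced), large ratios -> toward -22.5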

  1. An ERP study of recognition memory for concrete and abstract pictures in school-aged children

    PubMed Central

    Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J.; Jacobson, Sandra W.; Jacobson, Joseph L.

    2016-01-01

    Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N = 96; mean age = 11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as “new” or “old” (i.e., previously displayed) compared to abstract pictures. ERPs were characterised by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. PMID:27329352

  2. An empirical investigation of the visual rightness theory of picture perception.

    PubMed

    Locher, Paul J

    2003-10-01

    This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate reproductions of original abstract and representational paintings by renowned artists from two experimentally manipulated, less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions in the originals' principal structural networks. It was found that participants were significantly more successful than expected by chance in discriminating between originals and their highly altered, but not slightly altered, perturbations. Accuracy of detection was found to be a function of style of painting and a viewer's way of thinking about a work as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form and highest for representational works when their content and realism were the focus of attention. Findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.

  3. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
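
    "Decomposition of timing variance" in synchronized tapping is commonly done with the Wing-Kristofferson two-level model, which splits inter-tap interval variance into a central timekeeper component and a peripheral motor component. The abstract does not name the method the authors used, so the sketch below is one plausible, standard decomposition rather than a reconstruction of their analysis:

        import numpy as np

        def wing_kristofferson(intertap_intervals):
            """Wing-Kristofferson decomposition: motor variance = -lag-1 autocovariance,
            clock variance = total variance - 2 * motor variance."""
            d = np.asarray(intertap_intervals, dtype=float)
            d = d - d.mean()
            total_var = d.var()
            lag1_cov = np.mean(d[:-1] * d[1:])
            motor_var = max(-lag1_cov, 0.0)      # the model predicts a negative lag-1 covariance
            clock_var = total_var - 2.0 * motor_var
            return clock_var, motor_var

        # Simulated taps around a 329-ms beat (the ISI used in the study):
        # I_n = C_n + M_(n+1) - M_n, with clock SD 8 ms and motor SD 4 ms.
        rng = np.random.default_rng(1)
        clock = rng.normal(329.0, 8.0, 2000)
        motor = rng.normal(0.0, 4.0, 2001)
        print(wing_kristofferson(clock + np.diff(motor)))  # roughly (64, 16), i.e. (8^2, 4^2)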

  4. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  5. Attentional Capture by Emotional Stimuli Is Modulated by Semantic Processing

    ERIC Educational Resources Information Center

    Huang, Yang-Ming; Baddeley, Alan; Young, Andrew W.

    2008-01-01

    The attentional blink paradigm was used to examine whether emotional stimuli always capture attention. The processing requirement for emotional stimuli in a rapid sequential visual presentation stream was manipulated to investigate the circumstances under which emotional distractors capture attention, as reflected in an enhanced attentional blink…

  6. Visual Attention to Pictorial Food Stimuli in Individuals With Night Eating Syndrome: An Eye-Tracking Study.

    PubMed

    Baldofski, Sabrina; Lüthold, Patrick; Sperling, Ingmar; Hilbert, Anja

    2018-03-01

    Night eating syndrome (NES) is characterized by excessive evening and/or nocturnal eating episodes. Studies indicate an attentional bias towards food in other eating disorders. For NES, however, evidence of attentional food processing is lacking. Attention towards food and non-food stimuli was compared using eye-tracking in 19 participants with NES and 19 matched controls without eating disorders during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in initial fixation position or gaze duration. However, a significant orienting bias to food compared to non-food was found within the NES group, but not in controls. A significant attentional maintenance bias to non-food compared to food was found in both groups. Detection times did not differ between groups in the search task. Only in NES, attention to and faster detection of non-food stimuli were related to higher BMI and more evening eating episodes. The results might indicate an attentional approach-avoidance pattern towards food in NES. However, further studies should clarify the implications of attentional mechanisms for the etiology and maintenance of NES. Copyright © 2017. Published by Elsevier Ltd.

  7. Subliminal presentation of emotionally negative vs positive primes increases the perceived beauty of target stimuli.

    PubMed

    Era, Vanessa; Candidi, Matteo; Aglioti, Salvatore Maria

    2015-11-01

    Emotions have a profound influence on aesthetic experiences. Studies using affective priming procedures demonstrate, for example, that inducing a conscious negative emotional state biases the perception of abstract stimuli towards the sublime (Eskine et al. Emotion 12:1071-1074, 2012. doi: 10.1037/a0027200). Moreover, subliminal happy facial expressions have a positive impact on the aesthetic evaluation of abstract art (Flexas et al. PLoS ONE 8:e80154, 2013). Little is known about how emotion influences aesthetic perception of non-abstract, representational stimuli, especially those that are particularly relevant for social behaviour, like human bodies. Here, we explore whether the subliminal presentation of emotionally charged visual primes modulates the explicit subjective aesthetic judgment of body images. Using a forward/backward masking procedure, we presented subliminally positive and negative, arousal-matched, emotional or neutral primes and measured their effect on the explicit evaluation of perceived beauty (high vs low) and emotion (positive vs negative) evoked by abstract and body images. We found that negative primes increased subjective aesthetic evaluations of target bodies or abstract images in comparison with positive primes. No influence of primes on the emotional dimension of the targets was found, thus ruling out an unspecific arousal effect and strengthening the link between emotional valence and aesthetic appreciation. More specifically, that subliminal negative primes increase beauty ratings compared to subliminal positive primes indicates a clear link between negative emotions and positive aesthetic evaluations and vice versa, suggesting a possible link between negative emotion and the experience of sublime in art. The study expands previous research by showing the effect of subliminal negative emotions on the subjective aesthetic evaluation not only of abstract but also of body images.

  8. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  9. Conflicting demands of abstract and specific visual object processing resolved by frontoparietal networks.

    PubMed

    McMenamin, Brenton W; Marsolek, Chad J; Morseth, Brianna K; Speer, MacKenzie F; Burton, Philip C; Burgund, E Darcy

    2016-06-01

    Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities: an abstract category (AC) subsystem that operates effectively in the left hemisphere and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad frontoparietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials were related to the visual similarity of the cue-probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations.

  10. Enhanced pain and autonomic responses to ambiguous visual stimuli in chronic Complex Regional Pain Syndrome (CRPS) type I.

    PubMed

    Cohen, H E; Hall, J; Harris, N; McCabe, C S; Blake, D R; Jänig, W

    2012-02-01

    Cortical reorganisation of sensory, motor and autonomic systems can lead to dysfunctional central integrative control. This may contribute to signs and symptoms of Complex Regional Pain Syndrome (CRPS), including pain. It has been hypothesised that central neuroplastic changes may cause afferent sensory feedback conflicts and produce pain. We investigated autonomic responses produced by ambiguous visual stimuli (AVS) in CRPS, and their relationship to pain. Thirty CRPS patients with upper limb involvement and 30 age and sex matched healthy controls had sympathetic autonomic function assessed using laser Doppler flowmetry of the finger pulp at baseline and while viewing a control figure or AVS. Compared to controls, there were diminished vasoconstrictor responses and a significant difference in the ratio of response between affected and unaffected limbs (symmetry ratio) to a deep breath and viewing AVS. While viewing visual stimuli, 33.5% of patients had asymmetric vasomotor responses and all healthy controls had a homologous symmetric pattern of response. Nineteen (61%) CRPS patients had enhanced pain within seconds of viewing the AVS. All the asymmetric vasomotor responses were in this group, and were not predictable from baseline autonomic function. Ten patients had accompanying dystonic reactions in their affected limb: 50% were in the asymmetric sub-group. In conclusion, there is a group of CRPS patients that demonstrate abnormal pain networks interacting with central somatomotor and autonomic integrational pathways. © 2011 European Federation of International Association for the Study of Pain Chapters.

  11. Consuming Almonds vs. Isoenergetic Baked Food Does Not Differentially Influence Postprandial Appetite or Neural Reward Responses to Visual Food Stimuli.

    PubMed

    Sayer, R Drew; Dhillon, Jaapna; Tamer, Gregory G; Cornier, Marc-Andre; Chen, Ningning; Wright, Amy J; Campbell, Wayne W; Mattes, Richard D

    2017-07-27

    Nuts have high energy and fat contents, but nut intake does not promote weight gain or obesity, which may be partially explained by their proposed high satiety value. The primary aim of this study was to assess the effects of consuming almonds versus a baked food on postprandial appetite and neural responses to visual food stimuli. Twenty-two adults (19 women and 3 men) with a BMI between 25 and 40 kg/m² completed the current study during a 12-week behavioral weight loss intervention. Participants consumed either 28 g of whole, lightly salted roasted almonds or a serving of a baked food with equivalent energy and macronutrient contents in random order on two testing days prior to and at the end of the intervention. Pre- and postprandial appetite ratings and functional magnetic resonance imaging scans were completed on all four testing days. Postprandial hunger, desire to eat, fullness, and neural responses to visual food stimuli were not different following consumption of almonds and the baked food, nor were they influenced by weight loss. These results support energy and macronutrient contents as principal determinants of postprandial appetite and do not support a unique satiety effect of almonds independent of these variables.

  12. Consuming Almonds vs. Isoenergetic Baked Food Does Not Differentially Influence Postprandial Appetite or Neural Reward Responses to Visual Food Stimuli

    PubMed Central

    Dhillon, Jaapna; Tamer, Gregory G.; Cornier, Marc-Andre; Chen, Ningning; Wright, Amy J.; Campbell, Wayne W.; Mattes, Richard D.

    2017-01-01

    Nuts have high energy and fat contents, but nut intake does not promote weight gain or obesity, which may be partially explained by their proposed high satiety value. The primary aim of this study was to assess the effects of consuming almonds versus a baked food on postprandial appetite and neural responses to visual food stimuli. Twenty-two adults (19 women and 3 men) with a BMI between 25 and 40 kg/m2 completed the current study during a 12-week behavioral weight loss intervention. Participants consumed either 28 g of whole, lightly salted roasted almonds or a serving of a baked food with equivalent energy and macronutrient contents in random order on two testing days prior to and at the end of the intervention. Pre- and postprandial appetite ratings and functional magnetic resonance imaging scans were completed on all four testing days. Postprandial hunger, desire to eat, fullness, and neural responses to visual food stimuli were not different following consumption of almonds and the baked food, nor were they influenced by weight loss. These results support energy and macronutrient contents as principal determinants of postprandial appetite and do not support a unique satiety effect of almonds independent of these variables. PMID:28749419

  13. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  14. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  15. Role of spike-frequency adaptation in shaping neuronal response to dynamic stimuli.

    PubMed

    Peron, Simon Peter; Gabbiani, Fabrizio

    2009-06-01

    Spike-frequency adaptation is the reduction of a neuron's firing rate to a stimulus of constant intensity. In the locust, the Lobula Giant Movement Detector (LGMD) is a visual interneuron that exhibits rapid adaptation to both current injection and visual stimuli. Here, a reduced compartmental model of the LGMD is employed to explore adaptation's role in selectivity for stimuli whose intensity changes with time. We show that supralinearly increasing current injection stimuli are best at driving a high spike count in the response, while linearly increasing current injection stimuli (i.e., ramps) are best at attaining large firing rate changes in an adapting neuron. This result is extended with in vivo experiments showing that the LGMD's response to translating stimuli having a supralinear velocity profile is larger than the response to constant or linearly increasing velocity translation. Furthermore, we show that the LGMD's preference for approaching versus receding stimuli can partly be accounted for by adaptation. Finally, we show that the LGMD's adaptation mechanism appears well tuned to minimize sensitivity for the level of basal input.
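
    The effect of adaptation described above can be reproduced qualitatively with a toy firing-rate model (not the authors' compartmental LGMD model; the time constant, gain and adaptation strength below are arbitrary): an adaptation variable tracks recent output and is subtracted from the drive, so a constant input produces a rate that decays from its onset peak, while ramping and accelerating inputs produce rates that build up over time.

        import numpy as np

        def adapting_rate(drive, dt=0.001, tau_adapt=0.2, gain=1.0, strength=1.5):
            """Toy firing-rate neuron with subtractive spike-frequency adaptation."""
            a, rates = 0.0, []
            for d in drive:
                r = max(0.0, gain * (d - strength * a))  # output after subtracting adaptation
                a += dt * (r - a) / tau_adapt            # adaptation tracks recent firing
                rates.append(r)
            return np.array(rates)

        t = np.arange(0.0, 1.0, 0.001)
        for name, d in [("constant", np.ones_like(t)), ("ramp", t), ("supralinear", t ** 2)]:
            r = adapting_rate(d)
            print(name, "peak:", round(r.max(), 2), "final:", round(r[-1], 2))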

  16. Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.

    ERIC Educational Resources Information Center

    Hornstein, Henry A.; Mosley, James L.

    1979-01-01

    The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)

  17. Visual mismatch negativity indicates automatic, task-independent detection of artistic image composition in abstract artworks.

    PubMed

    Menzel, Claudia; Kovács, Gyula; Amado, Catarina; Hayn-Leichsenring, Gregor U; Redies, Christoph

    2018-05-06

    In complex abstract art, image composition (i.e., the artist's deliberate arrangement of pictorial elements) is an important aesthetic feature. We investigated whether the human brain detects image composition in abstract artworks automatically (i.e., independently of the experimental task). To this aim, we studied whether a group of 20 original artworks elicited a visual mismatch negativity when contrasted with a group of 20 images that were composed of the same pictorial elements as the originals, but in shuffled arrangements, which destroy artistic composition. We used a passive oddball paradigm with parallel electroencephalogram recordings to investigate the detection of image type-specific properties. We observed significant deviant-standard differences for the shuffled and original images, respectively. Furthermore, for both types of images, differences in amplitudes correlated with the behavioral ratings of the images. In conclusion, we show that the human brain can detect composition-related image properties in visual artworks in an automatic fashion. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Alcoholism and Judgments of Affective Stimuli

    PubMed Central

    Clark, Uraina S.; Oscar-Berman, Marlene; Shagrin, Barbara; Pencina, Michael

    2014-01-01

    This study sought to differentiate alcoholism-related changes in judgments of emotional stimuli from those of other populations in which such changes have been documented. Two sets of visual stimuli, one containing words and the other containing drawings of faces (representing a range of emotional content), were presented to abstinent alcoholic adults with and without Korsakoff’s syndrome, as well as to a healthy control group and four groups of patients with other neurobehavioral disorders: Parkinson’s disease, schizophrenia, depression, and posttraumatic stress disorder. Participants rated the stimuli according to emotional valence and intensity of emotion. Results implicated bi-hemispheric frontal and subcortical involvement in the abnormalities of emotion identification associated with alcoholism, and they also support the notion of age-related vulnerabilities in conjunction with alcoholism. PMID:17484598

  19. Transitive Responding in Hooded Crows Requires Linearly Ordered Stimuli

    ERIC Educational Resources Information Center

    Lazareva, Olga F.; Smirnova, Anna A.; Bagozkaja, Maria S.; Zorina, Zoya A.; Rayevsky, Vladimir V.; Wasserman, Edward A.

    2004-01-01

    Eight crows were taught to discriminate overlapping pairs of visual stimuli (A+ B-, B+ C-, C+ D-, and D+ E-). For 4 birds, the stimuli were colored cards with a circle of the same color on the reverse side whose diameter decreased from A to E (ordered feedback group). These circles were made available for comparison to potentially help the crows…

  20. Abstract conceptual feature ratings predict gaze within written word arrays: evidence from a Visual Wor(l)d paradigm

    PubMed Central

    Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J

    2016-01-01

    The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical ‘visual world paradigm’. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings, which are based on individual words, predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings, which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying
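
    The distance metric referred to above can be sketched very simply: each word is a vector of feature ratings and relatedness is the Euclidean distance between vectors. The feature dimensions and numbers below are invented placeholders, not the published ACF norms:

        import numpy as np

        # Hypothetical ACF-style ratings (e.g., valence, arousal, sensation, social relevance)
        # for a probe and four array words; real ACF norms use more dimensions and rated values.
        words = {
            "truth":   np.array([0.7, 0.3, 0.2, 0.8]),
            "justice": np.array([0.6, 0.4, 0.2, 0.9]),   # intended target: closest to the probe
            "gravity": np.array([0.1, 0.2, 0.6, 0.1]),
            "boredom": np.array([-0.5, 0.1, 0.2, 0.3]),
            "rhythm":  np.array([0.3, 0.6, 0.7, 0.2]),
        }
        probe = "truth"
        dist = {w: np.linalg.norm(words[probe] - v) for w, v in words.items() if w != probe}
        print(min(dist, key=dist.get), dist)  # the smallest distance marks the most related word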

  1. An ERP study of recognition memory for concrete and abstract pictures in school-aged children.

    PubMed

    Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J; Jacobson, Sandra W; Jacobson, Joseph L

    2016-08-01

    Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N = 96; mean age = 11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as "new" or "old" (i.e., previously displayed) compared to abstract pictures. ERPs were characterized by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Visual Memories Bypass Normalization.

    PubMed

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores-neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  3. Rapid temporal recalibration is unique to audiovisual stimuli.

    PubMed

    Van der Burg, Erik; Orchard-Mills, Emily; Alais, David

    2015-01-01

    Following prolonged exposure to asynchronous multisensory signals, the brain adapts to reduce the perceived asynchrony. Here, in three separate experiments, participants performed a synchrony judgment task on audiovisual, audiotactile or visuotactile stimuli and we used inter-trial analyses to examine whether temporal recalibration occurs rapidly on the basis of a single asynchronous trial. Even though all combinations used the same subjects, task and design, temporal recalibration occurred for audiovisual stimuli (i.e., the point of subjective simultaneity depended on the preceding trial's modality order), but none occurred when the same auditory or visual event was combined with a tactile event. Contrary to findings from prolonged adaptation studies showing recalibration for all three combinations, we show that rapid, inter-trial recalibration is unique to audiovisual stimuli. We conclude that recalibration occurs at two different timescales for audiovisual stimuli (fast and slow), but only on a slow timescale for audiotactile and visuotactile stimuli.

  4. Abstract and concrete categories? Evidences from neurodegenerative diseases.

    PubMed

    Catricalà, Eleonora; Della Rosa, Pasquale A; Plebani, Valentina; Vigliocco, Gabriella; Cappa, Stefano F

    2014-11-01

    We assessed the performance of patients with a diagnosis of Alzheimer's disease (AD) and of the semantic variant of primary progressive aphasia (sv-PPA) in a series of tasks involving both abstract and concrete stimuli, which were controlled for most of the variables that have been shown to affect performance on lexical-semantic tasks. Our aims were to compare the patients' performance on abstract and concrete stimuli and to assess category effects within the abstract and concrete domains. The results showed: (i) a better performance on abstract than concrete concepts in sv-PPA patients; (ii) category-related effects in the abstract domain, with emotion concepts being preserved in AD and social relations being selectively impaired in sv-PPA. In addition, a living/non-living dissociation may be (infrequently) observed in individual AD patients after controlling for an extensive set of potential confounds. Thus, differences between and within the concrete or abstract domain may be present in patients with semantic memory disorders, mirroring the different brain regions involved by the different pathologies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of surprise, disgust, fear, happiness, and neutral affect, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    PubMed

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or lack reliability. Additionally, most existing tests do not allow full control over important characteristics, such as off-angle rotations, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate these characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) luminance limitation of standard cathode ray tube (CRT) monitors has been preclusive. To this end, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created based on both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance on sine- and square-wave image sets across a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses in the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.
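
    The abstract does not give the algorithm itself, so the following is only a plausible sketch of the underlying idea (stochastic dithering so that regional mean luminance approximates a contrast finer than the display can show directly); the function name, parameters, and rendering rule are assumptions, not the published MATLAB implementation.

```python
import numpy as np

def pointillist_grating(size=256, cycles=8, contrast=0.02, orientation_deg=0.0,
                        mean_lum=0.5, rng=None):
    """Render a low-contrast sine-wave grating as a field of binary 'points'.

    Each pixel is set to white with probability equal to the desired local
    luminance, so the *expected* regional luminance reproduces contrasts far
    finer than an 8-bit display could present directly.
    """
    rng = np.random.default_rng() if rng is None else rng
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    luminance = mean_lum * (1 + contrast * np.sin(phase))   # target luminance in [0, 1]
    return (rng.random((size, size)) < luminance).astype(np.uint8)  # binary dot image

stim = pointillist_grating(contrast=0.005)  # contrast below one 8-bit grey step
print(stim.mean())  # regional mean approximates the requested luminance profile
```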

  8. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    NASA Astrophysics Data System (ADS)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
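
    The chip itself implements spiking neurons with on-chip plasticity; as a software analogy only, here is a classic Hopfield-style attractor network (a hypothetical toy, not the authors' hardware or learning rule) showing how Hebbian learning shapes connectivity so that a degraded stimulus relaxes to the stored prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two prototypical "visual stimuli" as +/-1 patterns (invented toy patterns).
patterns = np.sign(rng.standard_normal((2, 100)))

# Hebbian learning: repeated presentation shapes the recurrent weights.
W = np.zeros((100, 100))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def relax(state, steps=20):
    """Asynchronous relaxation toward the nearest attractor."""
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A degraded stimulus (about 30% of units flipped) sets the initial network state...
noisy = patterns[0] * np.where(rng.random(100) < 0.3, -1, 1)
# ...and relaxation retrieves the stored prototype (associative memory).
print(np.mean(relax(noisy) == patterns[0]))
```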

  9. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    PubMed Central

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-01-01

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. PMID:26463272

  10. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems.

    PubMed

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo

    2015-10-14

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.

  11. Predicting perceived visual complexity of abstract patterns using computational measures: The influence of mirror symmetry on complexity perception

    PubMed Central

    Leder, Helmut

    2017-01-01

    Visual complexity is relevant to many areas, ranging from improving the usability of technical displays or websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct consisting mainly of two dimensions: a quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order that is negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
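
    A hedged sketch of the two-factor modeling idea: two illustrative image measures (edge density as the quantitative factor, left-right mirror asymmetry as the structural one) stand in for the much larger feature set used in the study, and both the patterns and the complexity ratings below are simulated rather than taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def edge_density(img):
    """Quantitative factor: proportion of pixels with a strong local gradient."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy) > 0.1))

def mirror_asymmetry(img):
    """Structural factor: deviation from left-right mirror symmetry in [0, 1]."""
    return float(np.mean(img != np.fliplr(img)))

def features(img, gamma=0.5):
    # A compressive transform of asymmetry gives extra weight to small
    # departures from perfect symmetry, as suggested in the abstract.
    return [edge_density(img), mirror_asymmetry(img) ** gamma]

# Toy data standing in for abstract patterns and human complexity ratings.
rng = np.random.default_rng(1)
imgs = rng.random((200, 64, 64)) > 0.5
X = np.array([features(im) for im in imgs])
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, len(X))  # simulated ratings

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.feature_importances_)  # relative weight of the two factors
```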

  12. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.

    PubMed

    Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline

    2014-02-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.

  13. Associative visual learning by tethered bees in a controlled visual environment.

    PubMed

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  14. Visual Memories Bypass Normalization

    PubMed Central

    Bloem, Ilona M.; Watanabe, Yurika L.; Kibbe, Melissa M.; Ling, Sam

    2018-01-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation. PMID:29596038

  15. The visual attention span deficit in dyslexia is visual and not verbal.

    PubMed

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance on the non-verbal visual processing task in normally reading children. Furthermore, VA span impaired dyslexic children are also impaired on the categorization task independently of stimulus type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  16. A method for automatically abstracting visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1994-01-01

    Visual documents--motion sequences on film, videotape, and digital recording--constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training applications. The algorithm reduces 51 minutes of video sequences to 134 frames, a reduction of information in the range of 700:1.
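
    The abstract does not describe the selection rule, so the following is only a generic illustration of key-frame extraction by thresholded frame differences, not Rorvig's algorithm; the threshold and frame representation are assumptions.

```python
import numpy as np

def select_key_frames(frames, threshold=0.15):
    """Pick a new key frame whenever a frame differs enough from the last key frame.

    `frames` is an array of grayscale frames with values in [0, 1]
    (shape: n_frames x height x width); `threshold` is the mean absolute
    pixel difference that counts as a scene change.
    """
    keys = [0]
    for i in range(1, len(frames)):
        if np.mean(np.abs(frames[i] - frames[keys[-1]])) > threshold:
            keys.append(i)
    return keys

# Toy sequence: 300 near-identical frames with an abrupt change halfway through.
frames = np.zeros((300, 48, 64))
frames[150:] = 1.0
print(select_key_frames(frames))  # -> [0, 150]: two key frames summarize the clip
```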

  17. Conflicting Demands of Abstract and Specific Visual Object Processing Resolved by Fronto-Parietal Networks

    PubMed Central

    McMenamin, Brenton W.; Marsolek, Chad J.; Morseth, Brianna K.; Speer, MacKenzie F.; Burton, Philip C.; Burgund, E. Darcy

    2016-01-01

    Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities – an abstract category (AC) subsystem that operates effectively in the left hemisphere, and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad fronto-parietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue/probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations. PMID:26883940

  18. Visual analysis of large heterogeneous social networks by semantic and structural abstraction.

    PubMed

    Shen, Zeqian; Ma, Kwan-Liu; Eliassi-Rad, Tina

    2006-01-01

    Social network analysis is an active area of study beyond sociology. It uncovers the invisible relationships between actors in a network and provides understanding of social processes and behaviors. It has become an important technique in a variety of application areas such as the Web, organizational studies, and homeland security. This paper presents a visual analytics tool, OntoVis, for understanding large, heterogeneous social networks, in which nodes and links could represent different concepts and relations, respectively. These concepts and relations are related through an ontology (also known as a schema). OntoVis is named such because it uses information in the ontology associated with a social network to semantically prune a large, heterogeneous network. In addition to semantic abstraction, OntoVis also allows users to do structural abstraction and importance filtering to make large networks manageable and to facilitate analytic reasoning. All these unique capabilities of OntoVis are illustrated with several case studies.
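
    As a toy sketch of the two abstraction operations described (semantic pruning by ontology node type and importance filtering by degree), the snippet below uses networkx on an invented heterogeneous graph; OntoVis itself is an interactive visual analytics tool, not this code.

```python
import networkx as nx

# Toy heterogeneous social network: node types come from an ontology/schema.
G = nx.Graph()
G.add_nodes_from([
    ("alice", {"type": "person"}), ("bob", {"type": "person"}),
    ("acme", {"type": "organization"}), ("paper1", {"type": "document"}),
    ("paper2", {"type": "document"}),
])
G.add_edges_from([("alice", "bob"), ("alice", "acme"), ("bob", "paper1"),
                  ("alice", "paper1"), ("bob", "paper2")])

def semantic_prune(graph, keep_types):
    """Keep only nodes whose ontology type is of interest (semantic abstraction)."""
    nodes = [n for n, d in graph.nodes(data=True) if d["type"] in keep_types]
    return graph.subgraph(nodes).copy()

def filter_by_degree(graph, min_degree=2):
    """Importance filtering: drop low-degree nodes to make the network manageable."""
    nodes = [n for n, d in graph.degree() if d >= min_degree]
    return graph.subgraph(nodes).copy()

people_only = semantic_prune(G, {"person"})
print(people_only.edges())          # relations among persons only
print(filter_by_degree(G).nodes())  # structurally important nodes
```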

  19. Empathy, Pain and Attention: Cues that Predict Pain Stimulation to the Partner and the Self Capture Visual Attention

    PubMed Central

    Wu, Lingdan; Kirmse, Ursula; Flaisch, Tobias; Boiandina, Ganna; Kenter, Anna; Schupp, Harald T.

    2017-01-01

    Empathy motivates helping and cooperative behaviors and plays an important role in social interactions and personal communication. The present research examined the hypothesis that a state of empathy guides attention towards stimuli significant to others in a similar way as to stimuli relevant to the self. Sixteen couples in romantic partnerships were examined in a pain-related empathy paradigm including an anticipation phase and a stimulation phase. Abstract visual symbols (i.e., arrows and flashes) signaled the delivery of a Pain or Nopain stimulus to the partner or the self while dense sensor event-related potentials (ERPs) were simultaneously recorded from both persons. During the anticipation phase, stimuli predicting Pain compared to Nopain stimuli to the partner elicited a larger early posterior negativity (EPN) and late positive potential (LPP), which were similar in topography and latency to the EPN and LPP modulations elicited by stimuli signaling pain for the self. Noteworthy, using abstract cue symbols to cue Pain and Nopain stimuli suggests that these effects are not driven by perceptual features. The findings demonstrate that symbolic stimuli relevant for the partner capture attention, which implies a state of empathy to the pain of the partner. From a broader perspective, states of empathy appear to regulate attention processing according to the perceived needs and goals of the partner. PMID:28979199

  20. Comparing abstract numerical and visual depictions of risk in survey of parental assessment of risk in sickle cell hydroxyurea treatment.

    PubMed

    Patterson, Chavis A; Barakat, Lamia P; Henderson, Phyllis K; Nall, Faith; Westin, Anna; Dampier, Carlton D; Hsu, Lewis L

    2011-01-01

    Communicating risk is an important activity in medical decision-making; yet, numeracy is not a universal skill among the American public. We examined the hypothesis that numerical risk information about the use of hydroxyurea for children with sickle cell disease (SCD) would elicit different risk assessment responses when visual depictions were used instead of abstract numbers, and depending on the disease severity. Parents of 81 children with SCD participated in a survey in which hydroxyurea was first described as carrying a certain chance of risk for both birth defects and cancer. Then, the parents indicated the highest risk at which they would hypothetically consent to the treatment to help their child. Risk presentations were repeated with abstract numerical, pie graph, and 1000-person histogram formats. The χ² analyses comparing high-risk to low-risk assessment across presentation formats showed high consistency between visual depictions but low consistency of abstract numerical with visual depictions. The parents of children with SC and other less severe types of SCD were less willing to accept higher risk than those with SS when the data were presented numerically. Given earlier concerns about poor "numeracy" in the US population, visual depictions of risk could be an effective tool for routine communication in health education and medical decision-making.

  1. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  2. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
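
    A minimal sketch of how a steady-state response at each tagging frequency could be quantified in the spectral domain: the EEG trace here is simulated, and the amplitude normalization is one common convention, not the authors' exact analysis pipeline.

```python
import numpy as np

def ssr_amplitude(signal, fs, target_hz):
    """Amplitude of a steady-state response at a tagged frequency.

    `signal` is a single-channel EEG trace (1-D array), `fs` the sampling rate
    in Hz, and `target_hz` the stimulation frequency (e.g., 3.14 or 3.63 Hz).
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Simulated 20-s recording containing both tagging frequencies plus noise.
fs = 256
t = np.arange(0, 20, 1 / fs)
eeg = (1.0 * np.sin(2 * np.pi * 3.14 * t)
       + 0.4 * np.sin(2 * np.pi * 3.63 * t)
       + np.random.randn(t.size))
print(ssr_amplitude(eeg, fs, 3.14), ssr_amplitude(eeg, fs, 3.63))
```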

  3. Effects of allocation of attention on habituation to olfactory and visual food stimuli in children.

    PubMed

    Epstein, Leonard H; Saad, Frances G; Giacomelli, April M; Roemmich, James N

    2005-02-15

    Responding to food cues may be disrupted by allocating attention to other tasks. We report two experiments examining the effects of allocation of attention on salivary habituation to olfactory plus visual food cues in 8-12-year-old children. In Experiment 1, 42 children were presented with a series of 8 hamburger food stimulus presentations. During each intertrial interval, participants completed a controlled (hard), or automatic (easy) visual memory task, or no task (control). In Experiment 2, 22 children were presented with 10 presentations of a pizza food stimulus and either listened to an audiobook or no audiobook control. Results of Experiment 1 showed group differences in rate of change in salivation (p=0.014). Children in the controlled task did not habituate to repeated food cues, while children in the automatic (p<0.005) or no task (p<0.001) groups decreased responding over time. In Experiment 2, groups differed in the rate of change in salivation (p=0.004). Children in the no audiobook group habituated (p<0.001), while children in the audiobook group did not habituate. Changes in the rate of habituation when attending to non-food stimuli while eating may be a mechanism for increasing energy intake.

  4. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  5. Who is afraid of the invisible snake? Subjective visual awareness modulates posterior brain activity for evolutionarily threatening stimuli.

    PubMed

    Grassini, Simone; Holm, Suvi K; Railo, Henry; Koivisto, Mika

    2016-12-01

    Snakes were probably one of the earliest predators of primates, and snake images produce specific behavioral and electrophysiological reactions in humans. Pictures of snakes evoke enhanced activity over the occipital cortex, indexed by the "early posterior negativity" (EPN), as compared with pictures of other dangerous or non-dangerous animals. The present study investigated the possibility that the response to snake images is independent from visual awareness. The observers watched images of threatening and non-threatening animals presented in random order during rapid serial visual presentation. Four different masking conditions were used to manipulate awareness of the images. Electrophysiological results showed that the EPN was larger for snake images than for the other images employed in the unmasked condition. However, the difference disappeared when awareness of the stimuli decreased. Behavioral results on the effects of awareness did not show any advantage for snake images. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  7. Sex differences in neural responses to disgusting visual stimuli: implications for disgust-related psychiatric disorders.

    PubMed

    Caseras, Xavier; Mataix-Cols, David; An, Suk Kyoon; Lawrence, Natalia S; Speckens, Anne; Giampietro, Vincent; Brammer, Michael J; Phillips, Mary L

    2007-09-01

    A majority of patients with disgust-related psychiatric disorders such as animal phobias and contamination-related obsessive-compulsive disorder are women. The aim of this functional magnetic resonance imaging (fMRI) study was to examine possible sex differences in neural responses to disgust-inducing stimuli that might help explain this female predominance. Thirty-four healthy adult volunteers (17 women, all right-handed) were scanned while viewing alternating blocks of disgusting and neutral pictures from the International Affective Picture System. Using a partially-silent fMRI sequence, the participants rated their level of discomfort after each block of pictures. Skin conductance responses (SCR) were measured throughout the experiment. All participants completed the Disgust Scale. Both women and men reported greater subjective discomfort and showed more SCR fluctuations during the disgusting picture blocks than during the neutral picture blocks. Women and men also demonstrated a similar pattern of brain response to disgusting compared with neutral pictures, showing activation in the anterior insula, ventrolateral and dorsolateral prefrontal cortices, and visual regions. Compared with men, women had significantly higher disgust sensitivity scores, experienced more subjective discomfort, and demonstrated greater activity in left ventrolateral prefrontal regions. However, these differences were no longer significant when disgust sensitivity scores were controlled for. In healthy adult volunteers, there are significant sex-related differences in brain responses to disgusting stimuli that are irrevocably linked to greater disgust sensitivity scores in women. The implications for disgust-related psychiatric disorders are discussed.

  8. VEP Responses to Op-Art Stimuli

    PubMed Central

    O’Hare, Louise; Clarke, Alasdair D. F.; Pollux, Petra M. J.

    2015-01-01

    Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast. PMID:26422207

  9. VEP Responses to Op-Art Stimuli.

    PubMed

    O'Hare, Louise; Clarke, Alasdair D F; Pollux, Petra M J

    2015-01-01

    Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast.

  10. Sound iconicity of abstract concepts: Place of articulation is implicitly associated with abstract concepts of size and social dominance.

    PubMed

    Auracher, Jan

    2017-01-01

    The concept of sound iconicity implies that phonemes are intrinsically associated with non-acoustic phenomena, such as emotional expression, object size or shape, or other perceptual features. In this respect, sound iconicity is related to other forms of cross-modal associations in which stimuli from different sensory modalities are associated with each other due to the implicitly perceived correspondence of their primal features. One prominent example is the association between vowels, categorized according to their place of articulation, and size, with back vowels being associated with bigness and front vowels with smallness. However, to date the relative influence of perceptual and conceptual cognitive processing on this association is not clear. To bridge this gap, three experiments were conducted in which associations between nonsense words and pictures of animals or emotional body postures were tested. In these experiments participants had to infer the relation between visual stimuli and the notion of size from the content of the pictures, while directly perceivable features did not support-or even contradicted-the predicted association. Results show that implicit associations between articulatory-acoustic characteristics of phonemes and pictures are mainly influenced by semantic features, i.e., the content of a picture, whereas the influence of perceivable features, i.e., size or shape, is overridden. This suggests that abstract semantic concepts can function as an interface between different sensory modalities, facilitating cross-modal associations.

  11. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults

    PubMed Central

    Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081

  12. Bank of Standardized Stimuli (BOSS) phase II: 930 new normative photos.

    PubMed

    Brodeur, Mathieu B; Guérard, Katherine; Bouras, Maria

    2014-01-01

    Researchers have only recently started to take advantage of developments in technology and communication for sharing data and documents. However, the exchange of experimental material has not yet taken advantage of this progress. In order to facilitate access to experimental material, the Bank of Standardized Stimuli (BOSS) project was created as a free standardized set of visual stimuli accessible to all researchers through a normative database. The BOSS is currently the largest existing photo bank providing norms for more than 15 dimensions (e.g., familiarity, visual complexity, manipulability), making the BOSS an extremely useful research tool and a means to homogenize scientific data worldwide. The first phase of the BOSS was completed in 2010 and contained 538 normative photos. The second phase of the BOSS project, presented in this article, builds on the previous phase by adding 930 new normative photo stimuli. New categories of concepts were introduced, including animals, building infrastructures, body parts, and vehicles, and the number of photos in other categories was increased. All new photos of the BOSS were normalized relative to their name, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. These norms are a valuable asset for characterizing the stimuli as a function of the requirements of research and for controlling for potential confounding effects.
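
    As an illustration of how such a normative database can be put to work when building stimulus sets, here is a hypothetical sketch; the column names and values are invented and do not reproduce the actual BOSS norms or their scales.

```python
import pandas as pd

# Hypothetical slice of a BOSS-like norm table; fields and values are
# invented for illustration only.
norms = pd.DataFrame({
    "name":              ["hammer", "apple", "sculpture_01", "violin"],
    "familiarity":       [4.6, 4.8, 1.9, 3.7],   # 1-5 rating scale (assumed)
    "visual_complexity": [2.1, 1.8, 4.2, 3.9],
    "manipulability":    [4.9, 4.4, 2.0, 3.1],
})

# Select highly familiar, visually simple items for one condition...
concrete_set = norms.query("familiarity >= 4 and visual_complexity <= 2.5")
# ...and low-familiarity, complex items for a contrasting condition.
abstract_set = norms.query("familiarity < 2.5 and visual_complexity >= 4")
print(concrete_set["name"].tolist(), abstract_set["name"].tolist())
```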

  13. High-intensity erotic visual stimuli de-activate the primary visual cortex in women.

    PubMed

    Huynh, Hieu K; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-06-01

    The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the pictures or movies. However, in case the volunteers perform demanding non-visual tasks, the primary visual cortex becomes de-activated, although the amount of incoming visual sensory information is the same. Do low- and high-intensity erotic movies, compared to neutral movies, produce similar de-activation of the primary visual cortex? Brain activation/de-activation was studied by Positron Emission Tomography scanning of the brains of 12 healthy heterosexual premenopausal women, aged 18-47, who watched neutral, low- and high-intensity erotic film segments. We measured differences in regional cerebral blood flow (rCBF) in the primary visual cortex during watching neutral, low-intensity erotic, and high-intensity erotic film segments. Watching high-intensity erotic, but not low-intensity erotic movies, compared to neutral movies resulted in strong de-activation of the primary (BA 17) and adjoining parts of the secondary visual cortex. The strong de-activation during watching high-intensity erotic film might represent compensation for the increased blood supply in the brain regions involved in sexual arousal, also because high-intensity erotic movies do not require precise scanning of the visual field, because the impact is clear to the observer. © 2012 International Society for Sexual Medicine.

  14. How to Make a Good Animation: A Grounded Cognition Model of How Visual Representation Design Affects the Construction of Abstract Physics Knowledge

    ERIC Educational Resources Information Center

    Chen, Zhongzhou; Gladding, Gary

    2014-01-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…

  15. Enduring critical period plasticity visualized by transcranial flavoprotein imaging in mouse primary visual cortex.

    PubMed

    Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei

    2006-11-08

    Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.

  16. Beyond arousal and valence: The importance of the biological versus social relevance of emotional stimuli

    PubMed Central

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-01-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention; memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that: a) biologically emotional images hold attention more strongly than socially emotional images, b) memory for biologically emotional images was enhanced even with limited cognitive resources, but c) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images’ subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in visual cortex and greater functional connectivity between amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between amygdala and MPFC than biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity. PMID:21964552

  17. Pain and other symptoms of CRPS can be increased by ambiguous visual stimuli--an exploratory study.

    PubMed

    Hall, Jane; Harrison, Simon; Cohen, Helen; McCabe, Candida S; Harris, N; Blake, David R

    2011-01-01

    Visual disturbance, visuo-spatial difficulties, and exacerbations of pain associated with these, have been reported by some patients with Complex Regional Pain Syndrome (CRPS). We investigated the hypothesis that some visual stimuli (i.e. those which produce ambiguous perceptions) can induce pain and other somatic sensations in people with CRPS. Thirty patients with CRPS, 33 with rheumatology conditions and 45 healthy controls viewed two images: a bistable spatial image and a control image. For each image participants recorded the frequency of percept change in 1 min and reported any changes in somatosensation. 73% of patients with CRPS reported increases in pain and/or sensory disturbances including changes in perception of the affected limb, temperature and weight changes and feelings of disorientation after viewing the bistable image. Additionally, 13% of the CRPS group responded with striking worsening of their symptoms which necessitated task cessation. Subjects in the control groups did not report pain increases or somatic sensations. It is possible to worsen the pain suffered in CRPS, and to produce other somatic sensations, by means of a visual stimulus alone. This is a newly described finding. As a clinical and research tool, the experimental method provides a means to generate and exacerbate somaesthetic disturbances, including pain, without moving the affected limb and causing nociceptive interference. This may be particularly useful for brain imaging studies. Copyright © 2010 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.

  18. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  19. Read-out of emotional information from iconic memory: the longevity of threatening stimuli.

    PubMed

    Kuhbandner, Christof; Spitzer, Bernhard; Pekrun, Reinhard

    2011-05-01

    Previous research has shown that emotional stimuli are more likely than neutral stimuli to be selected by attention, indicating that the processing of emotional information is prioritized. In this study, we examined whether the emotional significance of stimuli influences visual processing already at the level of transient storage of incoming information in iconic memory, before attentional selection takes place. We used a typical iconic memory task in which the delay of a poststimulus cue, indicating which of several visual stimuli has to be reported, was varied. Performance decreased rapidly with increasing cue delay, reflecting the fast decay of information stored in iconic memory. However, although neutral stimulus information and emotional stimulus information were initially equally likely to enter iconic memory, the subsequent decay of the initially stored information was slowed for threatening stimuli, a result indicating that fear-relevant information has prolonged availability for read-out from iconic memory. This finding provides the first evidence that emotional significance already facilitates stimulus processing at the stage of iconic memory.

  20. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. The influence of visual stimuli was compared with that of acoustic and visual-acoustic stimuli. The methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are then presented, together with the concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors to be used in stuttering therapy.

  1. Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli

    ERIC Educational Resources Information Center

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…

  2. Moving Stimuli Facilitate Synchronization But Not Temporal Perception

    PubMed Central

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419

  3. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    PubMed

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  4. Fear conditioning to subliminal fear relevant and non fear relevant stimuli.

    PubMed

    Lipp, Ottmar V; Kempnich, Clare; Jee, Sang Hoon; Arnold, Derek H

    2014-01-01

    A growing body of evidence suggests that conscious visual awareness is not a prerequisite for human fear learning. For instance, humans can learn to be fearful of subliminal fear relevant images--images depicting stimuli thought to have been fear relevant in our evolutionary context, such as snakes, spiders, and angry human faces. Such stimuli could have a privileged status in relation to manipulations used to suppress usually salient images from awareness, possibly due to the existence of a designated sub-cortical 'fear module'. Here we assess this proposition, and find it wanting. We use binocular masking to suppress awareness of images of snakes and wallabies (particularly cute, non-threatening marsupials). We find that subliminal presentations of both classes of image can induce differential fear conditioning. These data show that learning, as indexed by fear conditioning, is neither contingent on conscious visual awareness nor on subliminal conditional stimuli being fear relevant.

  5. Infants' Visual Localization of Visual and Auditory Targets.

    ERIC Educational Resources Information Center

    Bechtold, A. Gordon; And Others

    This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each of the 40 subjects was presented with 50 trials: 25 visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…

  6. Black–white asymmetry in visual perception

    PubMed Central

    Lu, Zhong-Lin; Sperling, George

    2012-01-01

    With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221

  7. Reproducibility assessment of brain responses to visual food stimuli in adults with overweight and obesity.

    PubMed

    Drew Sayer, R; Tamer, Gregory G; Chen, Ningning; Tregellas, Jason R; Cornier, Marc-Andre; Kareken, David A; Talavage, Thomas M; McCrory, Megan A; Campbell, Wayne W

    2016-10-01

    The brain's reward system influences ingestive behavior and subsequently obesity risk. Functional magnetic resonance imaging (fMRI) is a common method for investigating brain reward function. This study sought to assess the reproducibility of fasting-state brain responses to visual food stimuli using BOLD fMRI. A priori brain regions of interest included bilateral insula, amygdala, orbitofrontal cortex, caudate, and putamen. Fasting-state fMRI and appetite assessments were completed by 28 adults (16 women, 12 men) with overweight or obesity on 2 days. Reproducibility was assessed by comparing mean fasting-state brain responses and measuring test-retest reliability of these responses on the two testing days. Mean fasting-state brain responses on day 2 were reduced compared with day 1 in the left insula and right amygdala, but mean day 1 and day 2 responses were not different in the other regions of interest. With the exception of the left orbitofrontal cortex response (fair reliability), test-retest reliabilities of brain responses were poor or unreliable. fMRI-measured responses to visual food cues in adults with overweight or obesity show relatively good mean-level reproducibility but considerable within-subject variability. Poor test-retest reliability reduces the likelihood of observing true correlations and increases the necessary sample sizes for studies. © 2016 The Obesity Society.

  8. Reproducibility assessment of brain responses to visual food stimuli in adults with overweight and obesity

    PubMed Central

    Sayer, R Drew; Tamer, Gregory G; Chen, Ningning; Tregellas, Jason R; Cornier, Marc-Andre; Kareken, David A; Talavage, Thomas M; McCrory, Megan A; Campbell, Wayne W

    2016-01-01

    Objective: The brain’s reward system influences ingestive behavior and, subsequently, obesity risk. Functional magnetic resonance imaging (fMRI) is a common method for investigating brain reward function. We sought to assess the reproducibility of fasting-state brain responses to visual food stimuli using BOLD fMRI. Methods: A priori brain regions of interest included bilateral insula, amygdala, orbitofrontal cortex, caudate, and putamen. Fasting-state fMRI and appetite assessments were completed by 28 adults (16 women, 12 men) with overweight or obesity on 2 days. Reproducibility was assessed by comparing mean fasting-state brain responses and measuring test-retest reliability of these responses on the 2 testing days. Results: Mean fasting-state brain responses on Day 2 were reduced compared to Day 1 in the left insula and right amygdala, but mean Day 1 and Day 2 responses were not different in the other regions of interest. With the exception of the left orbitofrontal cortex response (fair reliability), test-retest reliabilities of brain responses were poor or unreliable. Conclusion: fMRI-measured responses to visual food cues in adults with overweight or obesity show relatively good mean-level reproducibility, but considerable within-subject variability. Poor test-retest reliability reduces the likelihood of observing true correlations and increases the necessary sample sizes for studies. PMID:27542906

  9. Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions.

    PubMed

    Lithari, C; Frantzidis, C A; Papadelis, C; Vivas, Ana B; Klados, M A; Kourtidou-Papadeli, C; Pappas, C; Ioannides, A A; Bamidis, P D

    2010-03-01

    Men and women seem to process emotions and react to them differently. Yet, few neurophysiological studies have systematically investigated gender differences in emotional processing. Here, we studied gender differences using Event Related Potentials (ERPs) and Skin Conductance Responses (SCR) recorded from participants who passively viewed emotional pictures selected from the International Affective Picture System (IAPS). The arousal and valence dimensions of the stimuli were manipulated orthogonally. The peak amplitude and peak latency of ERP components and SCR were analyzed separately, and the scalp topographies of significant ERP differences were documented. Females responded with enhanced negative components (N100 and N200), in comparison to males, especially to the unpleasant visual stimuli, whereas both genders responded faster to highly arousing or unpleasant stimuli. Scalp topographies revealed more pronounced gender differences on central and left hemisphere areas. Our results suggest a difference in the way emotional stimuli are processed by genders: unpleasant and highly arousing stimuli evoke greater ERP amplitudes in women relative to men. It also seems that unpleasant or highly arousing stimuli are temporally prioritized during visual processing by both genders.

  10. Teaching with Concrete and Abstract Visual Representations: Effects on Students' Problem Solving, Problem Representations, and Learning Perceptions

    ERIC Educational Resources Information Center

    Moreno, Roxana; Ozogul, Gamze; Reisslein, Martin

    2011-01-01

    In 3 experiments, we examined the effects of using concrete and/or abstract visual problem representations during instruction on students' problem-solving practice, near transfer, problem representations, and learning perceptions. In Experiments 1 and 2, novice students learned about electrical circuit analysis with an instructional program that…

  11. Abstract numerical discrimination learning in rats.

    PubMed

    Taniuchi, Tohru; Sugihara, Junko; Wakashima, Mariko; Kamijo, Makiko

    2016-06-01

    In this study, we examined rats' discrimination learning of the numerical ordering positions of objects. In Experiments 1 and 2, five out of seven rats successfully learned to respond to the third of six identical objects in a row and showed reliable transfer of this discrimination to novel stimuli after being trained with three different training stimuli. In Experiment 3, the three rats from Experiment 2 continued to be trained to respond to the third object in an object array, which included an odd object that needed to be excluded when identifying the target third object. All three rats acquired this selective-counting task of specific stimuli, and two rats showed reliable transfer of this selective-counting performance to test sets of novel stimuli. In Experiment 4, the three rats from Experiment 3 quickly learned to respond to the third stimulus in object rows consisting of either six identical or six different objects. These results offer strong evidence for abstract numerical discrimination learning in rats.

  12. Attentional bias for positive emotional stimuli: A meta-analytic investigation.

    PubMed

    Pool, Eva; Brosch, Tobias; Delplanque, Sylvain; Sander, David

    2016-01-01

    Despite an initial focus on negative threatening stimuli, researchers have more recently expanded the investigation of attentional biases toward positive rewarding stimuli. The present meta-analysis systematically compared attentional bias for positive versus neutral visual stimuli across 243 studies (N = 9,120 healthy participants) that used different types of attentional paradigms and positive stimuli. Factors were tested that, as postulated by several attentional models derived from theories of emotion, might modulate this bias. Overall, results showed a significant, albeit modest (Hedges' g = .258), attentional bias for positive as compared with neutral stimuli. Moderator analyses revealed that the magnitude of this attentional bias varied as a function of arousal and that this bias was significantly larger when the emotional stimulus was relevant to specific concerns (e.g., hunger) of the participants compared with other positive stimuli that were less relevant to the participants' concerns. Moreover, the moderator analyses showed that attentional bias for positive stimuli was larger in paradigms that measure early, rather than late, attentional processing, suggesting that attentional bias for positive stimuli occurs rapidly and involuntarily. Implications for theories of emotion and attention are discussed. (c) 2015 APA, all rights reserved.
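
    A brief aside on the effect-size metric cited above: Hedges' g is a standardized mean difference with a small-sample bias correction. The abstract does not spell out the exact variant used in the meta-analysis; the formulation below is the commonly used one and is given only for orientation.

    \[
    g \;=\; J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
    \qquad
    s_p \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
    \qquad
    J \;\approx\; 1 - \frac{3}{4(n_1 + n_2 - 2) - 1}
    \]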

  13. Beyond arousal and valence: the importance of the biological versus social relevance of emotional stimuli.

    PubMed

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-03-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.

  14. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years to explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect are highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  15. Trends in HIV Terminology: Text Mining and Data Visualization Assessment of International AIDS Conference Abstracts Over 25 Years

    PubMed Central

    2018-01-01

    Background: The language encompassing health conditions can also influence behaviors that affect health outcomes. Few published quantitative studies have been conducted that evaluate HIV-related terminology changes over time. To expand this research, this study included an analysis of a dataset of abstracts presented at the International AIDS Conference (IAC) from 1989 to 2014. These abstracts reflect the global response to HIV over 25 years. Two powerful methodologies were used to evaluate the dataset: text mining to convert the unstructured information into structured data for analysis and data visualization to represent the data visually to assess trends. Objective: The purpose of this project was to evaluate the evolving use of HIV-related language in abstracts presented at the IAC from 1989 to 2014. Methods: Over 80,000 abstracts were obtained from the International AIDS Society and imported into a Microsoft SQL Server database for data processing and text mining analyses. A text mining module within the KNIME Analytics Platform, an open source software, was then used to mine the partially processed data to create a terminology corpus of key HIV terms. Subject matter experts grouped the terms into categories. Tableau, a data visualization software, was used to visualize the frequency metrics associated with the terms as line graphs and word clouds. The visualized dashboards were reviewed to discern changes in terminology use across IAC years. Results: The major findings identify trends in HIV-related terminology over 25 years. The term “AIDS epidemic” was dominantly used from 1989 to 1991 and then declined in use. In contrast, use of the term “HIV epidemic” increased through 2014. Beginning in the mid-1990s, the term “treatment experienced” appeared with increasing frequency in the abstracts. Use of terms identifying individuals as “carriers or victims” of HIV rarely appeared after 2008. Use of the terms “HIV positive” and “HIV infected
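
    The record above describes a pipeline built from SQL Server, KNIME, and Tableau. As a rough sketch of the underlying idea only (not the study's actual code or data), the Python fragment below counts per-year occurrences of a few candidate terms across a corpus of abstracts; the corpus format and the term list are illustrative assumptions.

      from collections import Counter, defaultdict
      import re

      # Hypothetical corpus: (year, abstract_text) pairs standing in for the IAC abstracts.
      corpus = [
          (1989, "The AIDS epidemic continues to spread among ..."),
          (2014, "Outcomes in the HIV epidemic among treatment experienced patients ..."),
      ]

      # Illustrative subset of terms to track over time.
      terms = ["aids epidemic", "hiv epidemic", "treatment experienced", "hiv positive"]

      def count_terms_by_year(corpus, terms):
          """Return {year: Counter mapping term -> occurrence count} for the given terms."""
          counts = defaultdict(Counter)
          patterns = {t: re.compile(re.escape(t)) for t in terms}
          for year, text in corpus:
              lowered = text.lower()
              for term, pattern in patterns.items():
                  counts[year][term] += len(pattern.findall(lowered))
          return counts

      for year, counter in sorted(count_terms_by_year(corpus, terms).items()):
          print(year, dict(counter))

    Plotting these per-year counts as line graphs would give the kind of terminology-trend view the record attributes to its Tableau dashboards.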

  16. Representation of time interval entrained by periodic stimuli in the visual thalamus of pigeons

    PubMed Central

    Wang, Shu-Rong

    2017-01-01

    Animals use the temporal information from previously experienced periodic events to instruct their future behaviors. The retina and cortex are involved in such behavior, but it remains largely unknown how the thalamus, transferring visual information from the retina to the cortex, processes the periodic temporal patterns. Here we report that the luminance cells in the nucleus dorsolateralis anterior thalami (DLA) of pigeons exhibited oscillatory activities in a temporal pattern identical to the rhythmic luminance changes of repetitive light/dark (LD) stimuli with durations in the seconds-to-minutes range. Particularly, after LD stimulation, the DLA cells retained the entrained oscillatory activities with an interval closely matching the duration of the LD cycle. Furthermore, the post-stimulus oscillatory activities of the DLA cells were sustained without feedback inputs from the pallium (equivalent to the mammalian cortex). Our study suggests that the experience-dependent representation of time interval in the brain might not be confined to the pallial/cortical level, but may occur as early as at the thalamic level. PMID:29284554

  17. Sequential Ideal-Observer Analysis of Visual Discriminations.

    ERIC Educational Resources Information Center

    Geisler, Wilson S.

    1989-01-01

    A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows the flow of discrimination information to be traced through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and the information content of those stimuli to be measured. (TJH)

  18. The Impact of Semantic Relevance and Heterogeneity of Pictorial Stimuli on Individual Brainstorming: An Extension of the SIAM Model

    ERIC Educational Resources Information Center

    Guo, Jing; McLeod, Poppy Lauretta

    2014-01-01

    Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…

  19. Reasoning and dyslexia: is visual memory a compensatory resource?

    PubMed

    Bacon, Alison M; Handley, Simon J

    2014-11-01

    Effective reasoning is fundamental to problem solving and achievement in education and employment. Protocol studies have previously suggested that people with dyslexia use reasoning strategies based on visual mental representations, whereas non-dyslexics use abstract verbal strategies. This research presents converging evidence from experimental and individual differences perspectives. In Experiment 1, dyslexic and non-dyslexic participants were similarly accurate on reasoning problems, but scores on a measure of visual memory ability only predicted reasoning accuracy for dyslexics. In Experiment 2, a secondary task loaded visual memory resources during concurrent reasoning. Dyslexics were significantly less accurate when reasoning under conditions of high memory load and showed reduced ability to subsequently recall the visual stimuli, suggesting that the memory and reasoning tasks were competing for the same visual cognitive resource. The results are consistent with an explanation based on limitations in the verbal and executive components of working memory in dyslexia and the use of compensatory visual strategies for reasoning. There are implications for cognitive activities that do not readily support visual thinking, whether in education, employment or less formal everyday settings. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Neural reactivity to visual food stimuli is reduced in some areas of the brain during evening hours compared to morning hours: an fMRI study in women.

    PubMed

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; LeCheminant, James D

    2016-03-01

    The extent that neural responsiveness to visual food stimuli is influenced by time of day is not well examined. Using a crossover design, 15 healthy women were scanned using fMRI while presented with low- and high-energy pictures of food, once in the morning (6:30-8:30 am) and once in the evening (5:00-7:00 pm). Diets were identical on both days of the fMRI scans and were verified using weighed food records. Visual analog scales were used to record subjective perception of hunger and preoccupation with food prior to each fMRI scan. Six areas of the brain showed lower activation in the evening to both high- and low-energy foods, including structures in reward pathways (P < 0.05). Nine brain regions showed significantly higher activation for high-energy foods compared to low-energy foods (P < 0.05). High-energy food stimuli tended to produce greater fMRI responses than low-energy food stimuli in specific areas of the brain, regardless of time of day. However, evening scans showed a lower response to both low- and high-energy food pictures in some areas of the brain. Subjectively, participants reported no difference in hunger by time of day (F = 1.84, P = 0.19), but reported they could eat more (F = 4.83, P = 0.04) and were more preoccupied with thoughts of food (F = 5.51, P = 0.03) in the evening compared to the morning. These data underscore the role that time of day may have on neural responses to food stimuli. These results may also have clinical implications for fMRI measurement in order to prevent a time of day bias.

  1. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    PubMed

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  2. Visual novel stimuli in an ERP novelty oddball paradigm: effects of familiarity on repetition and recognition memory.

    PubMed

    Cycowicz, Yael M; Friedman, David

    2007-01-01

    The orienting response, the brain's reaction to novel and/or out of context familiar events, is reflected by the novelty P3 of the ERP. Contextually novel events also engender high rates of recognition memory. We examined, under incidental and intentional conditions, the effects of visual symbol familiarity on the novelty P3 recorded during an oddball task and on the parietal episodic memory (EM) effect, an index of recollection. Repetition of familiar, but not unfamiliar, symbols elicited a reduction in the novelty P3. Better recognition performance for the familiar symbols was associated with a robust parietal EM effect, which was absent for the unfamiliar symbols in the incidental task. These data demonstrate that processing of novel events depends on expectation and whether stimuli have preexisting representations in long-term semantic memory.

  3. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

  4. Teaching children with autism spectrum disorder to tact olfactory stimuli.

    PubMed

    Dass, Tina K; Kisamore, April N; Vladescu, Jason C; Reeve, Kenneth F; Reeve, Sharon A; Taylor-Santa, Catherine

    2018-05-28

    Research on tact acquisition by children with autism spectrum disorder (ASD) has often focused on teaching participants to tact visual stimuli. It is important to evaluate procedures for teaching tacts of nonvisual stimuli (e.g., olfactory, tactile). The purpose of the current study was to extend the literature on secondary target instruction and tact training by evaluating the effects of a discrete-trial instruction procedure involving (a) echoic prompts, a constant prompt delay, and error correction for primary targets; (b) inclusion of secondary target stimuli in the consequent portion of learning trials; and (c) multiple exemplar training on the acquisition of item tacts of olfactory stimuli, emergence of category tacts of olfactory stimuli, generalization of category tacts, and emergence of category matching, with three children diagnosed with ASD. Results showed that all participants learned the item and category tacts following teaching, participants demonstrated generalization across category tacts, and category matching emerged for all participants. © 2018 Society for the Experimental Analysis of Behavior.

  5. Increasing Valid Profiles in Phallometric Assessment of Sex Offenders with Child Victims: Combining the Strengths of Audio Stimuli and Synthetic Characters.

    PubMed

    Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice

    2018-02-01

    Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To mitigate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants into their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
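
    As background for the ROC analysis reported above, the sketch below shows how an AUC of this kind is typically computed from group labels and a per-participant sexual-interest index; the data and variable names are illustrative assumptions, and scikit-learn is used simply as one common tool, not because the study used it.

      from sklearn.metrics import roc_auc_score, roc_curve

      # Hypothetical data: 1 = offender group, 0 = non-offender group;
      # each score is a participant's interest index (child- vs. adult-stimulus responding).
      labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
      scores = [0.9, 0.7, 0.6, 0.8, 0.4, 0.2, 0.3, 0.1, 0.5, 0.2]

      auc = roc_auc_score(labels, scores)               # area under the ROC curve
      fpr, tpr, thresholds = roc_curve(labels, scores)  # points along the curve
      print(f"AUC = {auc:.2f}")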

  6. The Role of Visual Eccentricity on Preference for Abstract Symmetry

    PubMed Central

    O’ Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial, participants changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not random patterns. In Experiment 2 participants were presented with either symmetrical or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity. PMID:27124081

  7. The Role of Visual Eccentricity on Preference for Abstract Symmetry.

    PubMed

    Rampone, Giulia; O' Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial, participants changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not random patterns. In Experiment 2 participants were presented with either symmetrical or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity.

  8. Gestalt perception modulates early visual processing.

    PubMed

    Herrmann, C S; Bosch, V

    2001-04-17

    We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli, all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.

  9. Distractor devaluation requires visual working memory.

    PubMed

    Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E

    2009-02-01

    Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.

  10. Cuttlefish Sepia officinalis Preferentially Respond to Bottom Rather than Side Stimuli When Not Allowed Adjacent to Tank Walls

    PubMed Central

    Taniguchi, Darcy A. A.; Gagnon, Yakir; Wheeler, Benjamin R.; Johnsen, Sönke; Jaffe, Jules S.

    2015-01-01

    Cuttlefish are cephalopods capable of rapid camouflage responses to visual stimuli. However, it is not always clear to what these animals are responding. Previous studies have found cuttlefish to be more responsive to lateral stimuli rather than substrate. However, in previous works, the cuttlefish were allowed to settle next to the lateral stimuli. In this study, we examine whether juvenile cuttlefish (Sepia officinalis) respond more strongly to visual stimuli seen on the sides versus the bottom of an experimental aquarium, specifically when the animals are not allowed to be adjacent to the tank walls. We used the Sub Sea Holodeck, a novel aquarium that employs plasma display screens to create a variety of artificial visual environments without disturbing the animals. Once the cuttlefish were acclimated, we compared the variability of camouflage patterns that were elicited from displaying various stimuli on the bottom versus the sides of the Holodeck. To characterize the camouflage patterns, we classified them in terms of uniform, disruptive, and mottled patterning. The elicited camouflage patterns from different bottom stimuli were more variable than those elicited by different side stimuli, suggesting that S. officinalis responds more strongly to the patterns displayed on the bottom than the sides of the tank. We argue that the cuttlefish pay more attention to the bottom of the Holodeck because it is closer and thus more relevant for camouflage. PMID:26465786

  11. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  12. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  13. Neural Mechanisms of Selective Visual Attention.

    PubMed

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  14. Newborn infants perceive abstract numbers

    PubMed Central

    Izard, Véronique; Sann, Coralie; Spelke, Elizabeth S.; Streri, Arlette

    2009-01-01

    Although infants and animals respond to the approximate number of elements in visual, auditory, and tactile arrays, only human children and adults have been shown to possess abstract numerical representations that apply to entities of all kinds (e.g., 7 samurai, seas, or sins). Do abstract numerical concepts depend on language or culture, or do they form a part of humans' innate, core knowledge? Here we show that newborn infants spontaneously associate stationary, visual-spatial arrays of 4–18 objects with auditory sequences of events on the basis of number. Their performance provides evidence for abstract numerical representations at the start of postnatal experience. PMID:19520833

  15. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    PubMed

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
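
    For orientation, the coupling index n referred to above is simply the ratio of fractional changes, and the Davis model named as the standard analysis relates the BOLD signal change to normalized CBF and CMRO2. The expressions below, and the typical parameter values, are the standard calibrated-BOLD forms rather than values taken from this particular study.

    \[
    n \;=\; \frac{\%\Delta\,\mathrm{CBF}}{\%\Delta\,\mathrm{CMRO_2}},
    \qquad
    \frac{\Delta S}{S_0} \;=\; M\left[\,1 \;-\; \left(\frac{\mathrm{CBF}}{\mathrm{CBF_0}}\right)^{\alpha-\beta}\left(\frac{\mathrm{CMRO_2}}{\mathrm{CMRO_{2,0}}}\right)^{\beta}\right]
    \]

    where M is the calibration constant (the maximal BOLD signal change) and values of roughly alpha = 0.38 and beta = 1.5 are commonly assumed.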

  16. Asymmetric top-down modulation of ascending visual pathways in pigeons.

    PubMed

    Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur

    2016-03-01

    Cerebral asymmetries are a ubiquitous phenomenon evident in many species, including humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at the thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.

  17. Familiarity and Attraction to Stimuli: Developmental Change or Methodological Artifact?

    ERIC Educational Resources Information Center

    Kail, Robert V., Jr.

    1974-01-01

    Investigates whether procedural differences or developmental changes account for the ambiguous results obtained in previous research on the affective consequences of mere exposure to visual stimuli with 7-, 9-, and 11-year-old children. (Author/ED)

  18. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images while their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  19. Trends in HIV Terminology: Text Mining and Data Visualization Assessment of International AIDS Conference Abstracts Over 25 Years.

    PubMed

    Dancy-Scott, Nicole; Dutcher, Gale A; Keselman, Alla; Hochstein, Colette; Copty, Christina; Ben-Senia, Diane; Rajan, Sampada; Asencio, Maria Guadalupe; Choi, Jason Jongwon

    2018-05-04

    The language encompassing health conditions can also influence behaviors that affect health outcomes. Few published quantitative studies have been conducted that evaluate HIV-related terminology changes over time. To expand this research, this study included an analysis of a dataset of abstracts presented at the International AIDS Conference (IAC) from 1989 to 2014. These abstracts reflect the global response to HIV over 25 years. Two powerful methodologies were used to evaluate the dataset: text mining to convert the unstructured information into structured data for analysis and data visualization to represent the data visually to assess trends. The purpose of this project was to evaluate the evolving use of HIV-related language in abstracts presented at the IAC from 1989 to 2014. Over 80,000 abstracts were obtained from the International AIDS Society and imported into a Microsoft SQL Server database for data processing and text mining analyses. A text mining module within the KNIME Analytics Platform, an open source software, was then used to mine the partially processed data to create a terminology corpus of key HIV terms. Subject matter experts grouped the terms into categories. Tableau, a data visualization software, was used to visualize the frequency metrics associated with the terms as line graphs and word clouds. The visualized dashboards were reviewed to discern changes in terminology use across IAC years. The major findings identify trends in HIV-related terminology over 25 years. The term "AIDS epidemic" was dominantly used from 1989 to 1991 and then declined in use. In contrast, use of the term "HIV epidemic" increased through 2014. Beginning in the mid-1990s, the term "treatment experienced" appeared with increasing frequency in the abstracts. Use of terms identifying individuals as "carriers or victims" of HIV rarely appeared after 2008. Use of the terms "HIV positive" and "HIV infected" peaked in the early 1990s and then declined in use. The terms

  20. Neuronal responses to face-like stimuli in the monkey pulvinar.

    PubMed

    Nguyen, Minh Nui; Hori, Etsuro; Matsumoto, Jumpei; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    The pulvinar nuclei appear to function as the subcortical visual pathway that bypasses the striate cortex, rapidly processing coarse facial information. We investigated responses from monkey pulvinar neurons during a delayed non-matching-to-sample task, in which monkeys were required to discriminate five categories of visual stimuli [photos of faces with different gaze directions, line drawings of faces, face-like patterns (three dark blobs on a bright oval), eye-like patterns and simple geometric patterns]. Of 401 neurons recorded, 165 neurons responded differentially to the visual stimuli. These visual responses were suppressed by scrambling the images. Although these neurons exhibited a broad response latency distribution, face-like patterns elicited responses with the shortest latencies (approximately 50 ms). Multidimensional scaling analysis indicated that the pulvinar neurons could specifically encode face-like patterns during the first 50-ms period after stimulus onset and classify the stimuli into one of the five different categories during the next 50-ms period. The amount of stimulus information conveyed by the pulvinar neurons and the number of stimulus-differentiating neurons were consistently higher during the second 50-ms period than during the first 50-ms period. These results suggest that responsiveness to face-like patterns during the first 50-ms period might be attributed to ascending inputs from the superior colliculus or the retina, while responsiveness to the five different stimulus categories during the second 50-ms period might be mediated by descending inputs from cortical regions. These findings provide neurophysiological evidence for pulvinar involvement in social cognition and, specifically, rapid coarse facial information processing. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  1. Does dorsolateral prefrontal cortex (DLPFC) activation return to baseline when sexual stimuli cease? The role of DLPFC in visual sexual stimulation.

    PubMed

    Leon-Carrion, Jose; Martín-Rodríguez, Juan Francisco; Damas-López, Jesús; Pourrezai, Kambiz; Izzetoglu, Kurtulus; Barroso Y Martin, Juan Manuel; Dominguez-Morales, M Rosario

    2007-04-06

    A fundamental question in human sexuality concerns the neural substrate underlying sexually arousing representations. Lesion and neuroimaging studies suggest that the dorsolateral prefrontal cortex (DLPFC) plays an important role in regulating the processing of visual sexual stimulation. The aim of this functional near-infrared spectroscopy (fNIRS) study was to explore DLPFC structures involved in the processing of erotic and non-sexual films. fNIRS was used to image the evoked cerebral blood oxygenation (CBO) response in 15 male and 15 female subjects. Our hypothesis was that a sexual stimulus would produce DLPFC activation during the period of direct stimulus perception ("on" period), and that this activation would continue after stimulus cessation ("off" period). A new paradigm was used to measure the relative oxygenated hemoglobin (oxyHb) concentrations in DLPFC while subjects viewed the two selected stimuli (a Roman orgy scene and a non-sexual film clip), and also immediately following stimulus cessation. Viewing of the non-sexual stimulus produced no overshoot in DLPFC, whereas exposure to the erotic stimulus produced a rapidly rising overshoot, which became even more pronounced following stimulus cessation. We also report gender differences in the timing and intensity of DLPFC activation in response to a sexually explicit visual stimulus. We found evidence indicating that men experience greater and more rapid sexual arousal than women when exposed to erotic stimuli. Our results indicate that self-regulation of DLPFC activation is modulated by subjective arousal and that cognitive appraisal of the sexual stimulus (valence) plays a secondary role in this regulation.

  2. Critical role of foreground stimuli in perceiving visually induced self-motion (vection).

    PubMed

    Nakamura, S; Shimojo, S

    1999-01-01

    The effects of a foreground stimulus on vection (illusory perception of self-motion induced by a moving background stimulus) were examined in two experiments. The experiments reveal that the presentation of a foreground pattern with a moving background stimulus may affect vection. The foreground stimulus facilitated vection strength when it remained stationary or moved slowly in the opposite direction to that of the background stimulus. On the other hand, there was a strong inhibition of vection when the foreground stimulus moved slowly with, or quickly against, the background. These results suggest that foreground stimuli, as well as background stimuli, play an important role in perceiving self-motion.

  3. Arbitrary conditional discriminative functions of meaningful stimuli and enhanced equivalence class formation.

    PubMed

    Nedelcu, Roxana I; Fields, Lanny; Arntzen, Erik

    2015-03-01

    Equivalence class formation by college students was influenced through the prior acquisition of conditional discriminative functions by one of the abstract stimuli (C) in the to-be-formed classes. Participants in the GR-0, GR-1, and GR-5 groups attempted to form classes under the simultaneous protocol, after mastering 0, 1, or 5 conditional relations between C and other abstract stimuli (V, W, X, Y, Z) that were not included in the to-be-formed classes (ABCDE). Participants in the GR-many group attempted to form classes that contained four abstract stimuli and one meaningful picture as the C stimulus. In the GR-0, GR-1, GR-5, and GR-many groups, classes were formed by 17, 25, 58, and 67% of participants, respectively. Thus, likelihood of class formation was enhanced by the prior formation of five C-based conditional relations (the GR-5 vs. GR-0 condition), or the inclusion of a meaningful stimulus as a class member (the GR-many vs. GR-0 condition). The GR-5 and GR-many conditions produced very similar yields, indicating that class formation was enhanced to a similar degree by including a meaningful stimulus or an abstract stimulus that had become a member of five conditional relations prior to equivalence class formation. Finally, the low and high yields produced by the GR-1 and GR-5 conditions showed that the class enhancement effect of the GR-5 condition was due to the number of conditional relations established during preliminary training and not to the sheer amount of reinforcement provided while learning these conditional relations. Class enhancement produced by meaningful stimuli, then, can be attributed to their acquired conditional discriminative functions as well as their discriminative, connotative, and denotative properties. © Society for the Experimental Analysis of Behavior.

  4. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task-relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right inferior occipital cortex. Using dynamic causal modelling (DCM), we found that both these differences in prestimulus activity and stimulus-evoked responses could be explained by enhanced effective connectivity within and between visual areas prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Interpretative bias in spider phobia: Perception and information processing of ambiguous schematic stimuli.

    PubMed

    Haberkamp, Anke; Schmidt, Filipp

    2015-09-01

    This study investigates the interpretative bias in spider phobia with respect to rapid visuomotor processing. We compared perception, evaluation, and visuomotor processing of ambiguous schematic stimuli between spider-fearful and control participants. Stimuli were produced by gradually morphing schematic flowers into spiders. Participants rated these stimuli with respect to their perceptual appearance and to their feelings of valence, disgust, and arousal. They also responded to the same stimuli within a response priming paradigm that measures rapid motor activation. Spider-fearful individuals showed an interpretative bias (i.e., ambiguous stimuli were perceived as more similar to spiders) and rated spider-like stimuli as more unpleasant, disgusting, and arousing. However, we observed no differences between spider-fearful and control participants in priming effects for ambiguous stimuli. For non-ambiguous stimuli, we observed a similar enhancement for phobic pictures as has been reported previously for natural images. We discuss our findings with respect to the visual representation of morphed stimuli and to perceptual learning processes. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder: A Meta-Analysis.

    PubMed

    Kaiser, Deborah; Jacob, Gitta A; Domes, Gregor; Arntz, Arnoud

    2016-01-01

    In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component in disorder pathogenesis and maintenance. Eleven emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs) and 95 clinical controls, and four visual dot-probe task (VDPT) studies with 151 BPD patients or subjects with BPD features and 62 NPs were included. We conducted two separate meta-analyses for AB in BPD. One meta-analysis focused on the EST for generally negative and BPD-specific/personally relevant negative words. The other meta-analysis concentrated on the VDPT for negative and positive facial stimuli. In the EST studies, there is evidence for an AB towards generally negative emotional words in BPD patients compared to NPs (standardized mean difference, SMD = 0.311) and to patients with other psychiatric disorders (SMD = 0.374). Regarding BPD-specific/personally relevant negative words, BPD patients reveal an even stronger AB than NPs (SMD = 0.454). The VDPT studies indicate a tendency towards an AB to positive facial stimuli but not negative stimuli in BPD patients compared to NPs. The findings reflect an AB in BPD to generally negative and BPD-specific/personally relevant negative words rather than an AB towards facial stimuli, and/or a biased allocation of covert attentional resources to negative emotional stimuli rather than a bias in the focus of visual attention. Further research regarding the role of childhood traumatization and comorbid anxiety disorders may improve the understanding of these underlying processes. © 2016 The Author(s) Published by S. Karger AG, Basel.
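
    For reference, the standardized mean differences (SMDs) reported above are of the following general form; this sketch computes Hedges' g (a bias-corrected Cohen's d) from hypothetical group summary statistics and is not taken from the meta-analysis itself.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # bias correction for small samples
    return d * correction

# e.g., Stroop interference scores for patients vs. non-patients (made-up numbers)
print(round(hedges_g(52.0, 18.0, 40, 45.5, 16.5, 42), 3))
```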

  7. Information from Multiple Modalities Helps 5-Month-Olds Learn Abstract Rules

    ERIC Educational Resources Information Center

    Frank, Michael C.; Slemmer, Jonathan A.; Marcus, Gary F.; Johnson, Scott P.

    2009-01-01

    By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli ( Marcus et al., 1999 ), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract…

  8. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the 120-ms delay corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  9. Visual response time to colored stimuli in peripheral retina - Evidence for binocular summation

    NASA Technical Reports Server (NTRS)

    Haines, R. F.

    1977-01-01

    Simple onset response time (RT) experiments, previously shown to exhibit binocular summation effects for white stimuli along the horizontal meridian, were performed for red and green stimuli along 5 oblique meridians. Binocular RT was significantly shorter than monocular RT for a 45-min-diameter spot of red, green, or white light within eccentricities of about 50 deg from the fovea. Relatively large meridian differences were noted that appear to be due to the degree to which the images fall on corresponding retinal areas.

  10. Separability of Abstract-Category and Specific-Exemplar Visual Object Subsystems: Evidence from fMRI Pattern Analysis

    PubMed Central

    McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.

    2014-01-01

    Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
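
    A schematic Python/scikit-learn sketch of the cross-decoding logic described above: train a classifier on same-exemplar-primed versus word-primed patterns, then test its ability to separate the trial types that index AC and SE priming. The voxel patterns, trial counts, and choice of a linear SVM are illustrative assumptions; the published analysis may differ in classifier and cross-validation details.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 200
same_ex = rng.normal(0.2, 1.0, (n_trials, n_voxels))   # same-exemplar primed
diff_ex = rng.normal(0.1, 1.0, (n_trials, n_voxels))   # different-exemplar primed
word_pr = rng.normal(0.0, 1.0, (n_trials, n_voxels))   # word primed

# Train: same-exemplar primed (label 1) vs. word primed (label 0)
clf = LinearSVC(max_iter=5000).fit(
    np.vstack([same_ex, word_pr]),
    np.r_[np.ones(n_trials), np.zeros(n_trials)],
)

# AC priming: can the classifier separate different-exemplar primed from word primed?
ac_acc = np.mean(np.r_[clf.predict(diff_ex) == 1, clf.predict(word_pr) == 0])
# SE priming: can it separate same-exemplar primed from different-exemplar primed?
se_acc = np.mean(np.r_[clf.predict(same_ex) == 1, clf.predict(diff_ex) == 0])
print(f"AC decoding: {ac_acc:.2f}  SE decoding: {se_acc:.2f}")
# A full analysis would hold out trials (cross-validation) rather than reuse
# training trials, as is done here only for brevity.
```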

  11. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for
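
    A sketch of the response-averaging benchmark referred to above, under an idealized Gaussian direction-tuning assumption: the predicted response to a bidirectional (transparent-motion) stimulus is the mean of the responses to its two components presented alone. All tuning parameters are hypothetical.

```python
import numpy as np

def tuning(direction_deg, preferred_deg=0.0, width_deg=40.0, peak_rate=50.0):
    """Idealized Gaussian direction-tuning curve (spikes/s)."""
    delta = (np.asarray(direction_deg) - preferred_deg + 180.0) % 360.0 - 180.0
    return peak_rate * np.exp(-0.5 * (delta / width_deg) ** 2)

separation = 30.0                          # angular separation of the two components
midpoints = np.arange(0.0, 360.0, 5.0)     # direction midway between the components

# Averaging model: response to the bidirectional stimulus equals the mean of
# the responses to each component direction presented alone.
predicted = 0.5 * (tuning(midpoints - separation / 2.0)
                   + tuning(midpoints + separation / 2.0))

# With a separation smaller than the tuning width, the prediction is unimodal
# and peaks at the vector-average direction (here, the preferred 0 deg).
print(midpoints[np.argmax(predicted)])
```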

  12. Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry

    PubMed Central

    O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte

    2013-01-01

    Purpose We sought brain activity that predicts visual consciousness. Methods We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this with a 1000-ms mask and then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536

  13. Functional specialization and generalization for grouping of stimuli based on colour and motion

    PubMed Central

    Zeki, Semir; Stutters, Jonathan

    2013-01-01

    This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum where the activity resulting from the two types of grouping overlapped. This suggests (a) that the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. PMID:23415950

  14. Visual and vestibular components of motion sickness.

    PubMed

    Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D

    1996-10-01

    The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aimed, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1, 21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation were used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1, 21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
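
    A sketch of the within-subjects comparison reported above (visual-only vs. vestibular-only stimulation across the same 22 subjects) framed as a one-way repeated-measures ANOVA in statsmodels; the symptom scores are invented, and the original analysis may have used different software and factor structure.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
n_subjects = 22
visual_only = rng.normal(4.0, 1.5, n_subjects)       # invented MS symptom scores
vestibular_only = rng.normal(3.0, 1.5, n_subjects)

data = pd.DataFrame({
    "subject": np.tile(np.arange(n_subjects), 2),
    "condition": np.repeat(["visual", "vestibular"], n_subjects),
    "ms_score": np.concatenate([visual_only, vestibular_only]),
})

# One-way repeated-measures ANOVA on stimulation condition
result = AnovaRM(data, depvar="ms_score", subject="subject",
                 within=["condition"]).fit()
print(result)
```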

  15. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  16. Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa.

    PubMed

    Boehm, I; King, J A; Bernardoni, F; Geisler, D; Seidel, M; Ritschel, F; Goschke, T; Haynes, J-D; Roessner, V; Ehrlich, S

    2018-04-01

    Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN.

  17. ERP Modulation during Observation of Abstract Paintings by Franz Kline

    PubMed Central

    Sbriscia-Fioretti, Beatrice; Berchio, Cristina; Freedberg, David; Gallese, Vittorio; Umiltà, Maria Alessandra

    2013-01-01

    The aim of this study was to test the involvement of sensorimotor cortical circuits during the beholding of the static consequences of hand gestures devoid of any meaning. In order to verify this hypothesis, we performed an EEG experiment presenting participants with images of abstract works of art with marked traces of brushstrokes. The EEG data were analyzed using event-related potentials (ERPs). We aimed to demonstrate a direct involvement of sensorimotor cortical circuits during the beholding of these selected works of abstract art. The stimuli consisted of three different abstract black and white paintings by Franz Kline. Results verified our experimental hypothesis, showing activation of premotor and motor cortical areas during stimulus observation. In addition, observation of the abstract works of art elicited activation of reward-related orbitofrontal areas and cognitive categorization-related prefrontal areas. The cortical sensorimotor activation is a fundamental neurophysiological demonstration of the direct involvement of the cortical motor system in the perception of static meaningless images belonging to abstract art. These results support the role of embodied simulation of the artist's gestures in the perception of works of art. PMID:24130693

  18. Visual Spectroscopy of R Scuti (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Undreiu, L.; Chapman, A.

    2015-06-01

    (Abstract only) We are currently conducting a visual spectral analysis of the brightest known RV Tauri variable star, R Scuti. The goal of our undergraduate research project is to investigate this variable star's erratic nature by collecting spectra at different times in its cycle. Starting in late June of 2014 and proceeding into the following four months, we have monitored the alterations in the spectral characteristics that accompany the progression of R Sct's irregular cycle. During this time, we were given the opportunity to document the star's most recent descent from maximum brightness V~5 to a relatively deep minimum of V~7.5. Analysis of the data taken during the star's period of declining magnitude has provided us with several interesting findings that concur with the observations of more technically sophisticated studies. Following their collection, we compared our observations and findings with archived material in the hopes of facilitating a better understanding of the physical state of RV Tauri stars and the perplexing nature of their evolution. Although identification of the elements in the star's bright phase proved to be challenging, documenting clear absorption features in its fainter stage was far less difficult. As previously reported in similar studies, we identified prominent TiO molecular absorption bands near R Sct's faintest state, typical of mid-M spectral type stars. In addition to these TiO absorption lines, we report the presence of many more metallic lines in the spectral profiles obtained near the star's minimum. Supportive of previously published hypotheses regarding the causation of its variability, we observed significant variation in the star's spectral characteristics throughout different phases of its cycle. We are hopeful that our observations will make a meaningful contribution to existing databases and help advance our collective understanding of RV Tauri stars and their evolutionary significance.

  19. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field (position capture; Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  20. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    ERIC Educational Resources Information Center

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  1. Inverse target- and cue-priming effects of masked stimuli.

    PubMed

    Mattler, Uwe

    2007-02-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses are facilitated after dissimilar primes. Previous studies on inverse priming effects examined target-priming effects, which arise when the prime and the target stimuli share features that are critical for the response decision. In contrast, 3 experiments of the present study demonstrate inverse priming effects in a nonmotor cue-priming paradigm. Inverse cue-priming effects exhibited time courses comparable to inverse target-priming effects. Results suggest that inverse priming effects do not arise from specific processes of the response system but follow from operations that are more general.

  2. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the

  3. Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation

    PubMed Central

    Waterston, Michael L.; Pack, Christopher C.

    2010-01-01

    Background Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776

  4. Abstraction and art.

    PubMed

    Gortais, Bernard

    2003-07-29

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music.

  5. Conditioning with compound stimuli in Drosophila melanogaster in the flight simulator.

    PubMed

    Brembs, B; Heisenberg, M

    2001-08-01

    Short-term memory in Drosophila melanogaster operant visual learning in the flight simulator is explored using patterns and colours as a compound stimulus. Presented together during training, the two stimuli accrue the same associative strength whether or not a prior training phase rendered one of the two stimuli a stronger predictor for the reinforcer than the other (no blocking). This result adds Drosophila to the list of other invertebrates that do not exhibit the robust vertebrate blocking phenomenon. Other forms of higher-order learning, however, were detected: a solid sensory preconditioning and a small second-order conditioning effect imply that associations between the two stimuli can be formed, even if the compound is not reinforced.

  6. A case of epilepsy induced by eating or by visual stimuli of food made of minced meat.

    PubMed

    Mimura, Naoya; Inoue, Takeshi; Shimotake, Akihiro; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke

    2017-08-31

    We report a 34-year-old woman with eating epilepsy induced not only by eating but also by seeing foods made of minced meat. In her early 20s, she started having simple partial seizures (SPS), experienced as flashbacks and epigastric discomfort, induced by particular foods. When she was 33 years old, she developed SPS followed by a secondarily generalized tonic-clonic seizure (sGTCS) provoked by eating a hot dog and, 6 months later, by merely seeing a video of dumplings. We performed video electroencephalogram (EEG) monitoring while she was watching the video of soup dumplings, which had most likely caused the sGTCS. Ictal EEG showed rhythmic theta activity in the left frontal to mid-temporal area, followed by a generalized seizure pattern. In this patient, seizures were provoked not only by eating particular foods but also by seeing them. This suggests a form of epilepsy involving visual stimuli.

  7. Enhanced equivalence class formation by the delay and relational functions of meaningful stimuli.

    PubMed

    Arntzen, Erik; Nartey, Richard K; Fields, Lanny

    2015-05-01

    Undergraduates in six groups of 10 attempted to form three 3-node 5-member equivalence classes (A → B → C → D → E) under the simultaneous protocol. In five of six groups, all stimuli were abstract shapes; in the PIC group, C stimuli were pictures with the remainder being abstract shapes. Before class formation, participants in the Identity-S and Identity-D groups were given preliminary training to form identity conditional discriminations with the C stimuli using simultaneous and 6 s delayed matching-to-sample procedures, respectively. In the Arbitrary-S and Arbitrary-D groups, before class formation, arbitrary conditional discriminations were formed between C and X stimuli using simultaneous and 6 s delayed matching-to-sample procedures, respectively. With no preliminary training, classes in the PIC and ABS groups were formed by 80% and 0% of participants, respectively. After preliminary training, class formation (yield) increased with delay, regardless of relational type. For each of the two delays, yield was slightly greater after forming arbitrary instead of identity relations. Yield was greatest, however, when a class contained a meaningful stimulus (PIC). During failed class formation, probes produced experimenter-defined relations, participant-defined relations, and unsystematic responding; delay, but not the relation type used in preliminary training, influenced relational and indeterminate responding. These results suggest how meaningful stimuli enhance equivalence class formation. © Society for the Experimental Analysis of Behavior.

  8. [Visual perception of Japanese characters and complicated figures: developmental changes of visual P300 event-related potentials].

    PubMed

    Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko

    2002-07-01

    In order to evaluate developmental changes in visual perception, the P300 event-related potentials (ERPs) of a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of visual P300 in response to the Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. Visual P300 was dominant over the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji and figure stimuli decreased until 25.8, 26.9 and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect variety in visual information processing.
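
    A sketch of the quadratic regression described above: fitting a second-order polynomial to P300 latency as a function of age and reading off the age at which latency reaches its minimum. The latency and age values below are placeholders, not the study's data.

```python
import numpy as np

# Placeholder (age, latency) values; the real study fit one curve per stimulus type
age = np.array([7, 9, 12, 15, 18, 22, 26, 30, 34, 37], dtype=float)
latency_ms = np.array([480, 450, 420, 400, 385, 378, 375, 380, 390, 400], dtype=float)

coeffs = np.polyfit(age, latency_ms, deg=2)   # [a, b, c] for a*age**2 + b*age + c
a, b, _ = coeffs
age_at_minimum = -b / (2 * a)                 # vertex of the fitted parabola
print(f"latency minimum near {age_at_minimum:.1f} years")
```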

  9. Separability of abstract-category and specific-exemplar visual object subsystems: evidence from fMRI pattern analysis.

    PubMed

    McMenamin, Brenton W; Deason, Rebecca G; Steele, Vaughn R; Koutstaal, Wilma; Marsolek, Chad J

    2015-02-01

    Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Visual memories for perceived length are well preserved in older adults.

    PubMed

    Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N

    2011-09-15

    Three experiments compared younger (mean age was 23.7 years) and older (mean age was 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where the knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
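
    A sketch of how a length-difference threshold might be estimated from method-of-constant-stimuli data: fit a cumulative Gaussian psychometric function to the proportion of "longer than the standard" responses and express the resulting threshold as a percentage of the standard. The response proportions are invented, and the study's exact threshold definition may differ.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

standard_mm = 100.0
test_mm = np.array([90, 94, 97, 100, 103, 106, 110], dtype=float)
p_longer = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.88, 0.97])  # invented data

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, test_mm, p_longer, p0=[standard_mm, 5.0])

# One common threshold definition: half the 25%-75% interquartile range
threshold_mm = 0.5 * (norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma))
print(f"difference threshold: {100 * threshold_mm / standard_mm:.2f}% of the standard")
```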

  11. Sex/Gender Differences in Neural Correlates of Food Stimuli: A Systematic Review of Functional Neuroimaging Studies

    PubMed Central

    Chao, Ariana M.; Loughead, James; Bakizada, Zayna M.; Hopkins, Christina M.; Geliebter, Allan; Gur, Ruben C.; Wadden, Thomas A.

    2017-01-01

    Sex and gender differences in food perceptions and eating behaviors have been reported in psychological and behavioral studies. The aim of this systematic review was to synthesize studies that examined sex/gender differences in neural correlates of food stimuli, as assessed by functional neuroimaging. Published studies to 2016 were retrieved and included if they used food or eating stimuli, assessed patients with functional magnetic resonance imaging (fMRI) or positron emission tomography (PET), and compared activation between males and females. Fifteen studies were identified. In response to visual food cues, females, compared to males, showed increased activation in the frontal, limbic, and striatal areas of the brain as well as the fusiform gyrus. Differences in neural response to gustatory stimuli were inconsistent. This body of literature suggests that females may be more reactive to visual food stimuli. However, findings are based on a small number of studies and additional research is needed to establish a more definitive explanation and conclusion. PMID:28371180

  12. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  13. The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.

    PubMed

    van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R

    2018-05-04

    Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  14. Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

    PubMed

    Isomura, Tomoko; Nakano, Tamami

    2016-12-14

    Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).

  15. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks, whereas control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  16. Rescuing Stimuli from Invisibility: Inducing a Momentary Release from Visual Masking with Pre-Target Entrainment

    ERIC Educational Resources Information Center

    Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele; Beck, Diane M.; Lleras, Alejandro

    2010-01-01

    At near-threshold levels of stimulation, identical stimulus parameters can result in very different phenomenal experiences. Can we manipulate which stimuli reach consciousness? Here we show that consciousness of otherwise masked stimuli can be experimentally induced by sensory entrainment. We preceded a backward-masked stimulus with a series of…

  17. Lack of sleep affects the evaluation of emotional stimuli.

    PubMed

    Tempesta, Daniela; Couyoumdjian, Alessandro; Curcio, Giuseppe; Moroni, Fabio; Marzano, Cristina; De Gennaro, Luigi; Ferrara, Michele

    2010-04-29

    Sleep deprivation (SD) negatively affects various cognitive performances, but surprisingly evidence about a specific impact of sleep loss on subjective evaluation of emotional stimuli remains sparse. In the present study, we assessed the effect of SD on the emotional rating of standardized visual stimuli selected from the International Affective Picture System. Forty university students were assigned to the sleep group (n=20), tested before and after one night of undisturbed sleep at home, or to the deprivation group, tested before and after one night of total SD. One-hundred and eighty pictures (90 test, 90 retest) were selected and categorized as pleasant, neutral and unpleasant. Participants were asked to judge their emotional reactions while viewing pictures by means of the Self-Assessment Manikin. Subjective mood ratings were also obtained by means of Visual Analog Scales. No significant effect of SD was observed on the evaluation of pleasant and unpleasant stimuli. On the contrary, SD subjects perceived the neutral pictures more negatively and showed an increase of negative mood and a decrease of subjective alertness compared to non-deprived subjects. Finally, an analysis of covariance on mean valence ratings of neutral pictures using negative mood as covariate confirmed the effect of SD. Our results indicate that sleep is involved in regulating emotional evaluation. The emotional labeling of neutral stimuli biased toward negative responses was not mediated by the increase of negative mood. This effect can be interpreted as an adaptive reaction supporting the "better safe than sorry" principle. It may also have applied implications for healthcare workers, military and law-enforcement personnel. Copyright 2010 Elsevier Inc. All rights reserved.
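
    A sketch of the covariance analysis mentioned above, comparing mean valence ratings of neutral pictures between the sleep and deprivation groups with negative mood as a covariate, using statsmodels; group sizes, ratings, and mood scores are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 20  # per group, as in the study design
df = pd.DataFrame({
    "group": ["sleep"] * n + ["deprived"] * n,
    "neg_mood": np.concatenate([rng.normal(2.0, 0.8, n), rng.normal(3.0, 0.8, n)]),
    "valence": np.concatenate([rng.normal(5.0, 0.6, n), rng.normal(4.4, 0.6, n)]),
})

# ANCOVA: group effect on neutral-picture valence, adjusted for negative mood
model = smf.ols("valence ~ C(group) + neg_mood", data=df).fit()
print(anova_lm(model, typ=2))
```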

  18. Natural concepts in a juvenile gorilla (Gorilla gorilla gorilla) at three levels of abstraction.

    PubMed Central

    Vonk, Jennifer; MacDonald, Suzanne E

    2002-01-01

    The extent to which nonhumans are able to form conceptual versus perceptual discriminations remains a matter of debate. Among the great apes, only chimpanzees have been tested for conceptual understanding, defined as the ability to form discriminations not based solely on simple perceptual features of stimuli, and to transfer this learning to novel stimuli. In the present investigation, a young captive female gorilla was trained at three levels of abstraction (concrete, intermediate, and abstract) involving sets of photographs representing natural categories (e.g., orangutans vs. humans, primates vs. nonprimate animals, animals vs. foods). Within each level of abstraction, when the gorilla had learned to discriminate positive from negative exemplars in one set of photographs, a novel set was introduced. Transfer was defined in terms of high accuracy during the first two sessions with the new stimuli. The gorilla acquired discriminations at all three levels of abstraction but showed unambiguous transfer only with the concrete and abstract stimulus sets. Detailed analyses of response patterns revealed little evidence of control by simple stimulus features. Acquisition and transfer involving abstract stimulus sets suggest a conceptual basis for gorilla categorization. The gorilla's relatively poor performance with intermediate-level discriminations parallels findings with pigeons, and suggests a need to reconsider the role of perceptual information in discriminations thought to indicate conceptual behavior in nonhumans. PMID:12507006

  19. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity with the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low-frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boosted the VEP amplitude of incoming visual stimuli if the stimuli were presented at the high-excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serve as a gate for incoming sensory inputs. Significance Statement: Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, a mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low-frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity-evoked oscillations influence neuronal responses to incoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
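
    A rough sketch of the phase-dependence analysis implied above: estimate the instantaneous 4-6 Hz phase at each stimulus onset with a Hilbert transform and compare evoked amplitudes across phase bins. The signal, onset times, filter settings, and the definition of the "high-excitability" phase are illustrative assumptions, not the authors' parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    rng = np.random.default_rng(0)
    fs = 1000.0                                    # sampling rate (Hz)
    t = np.arange(0, 120, 1 / fs)
    lfp = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)  # toy signal
    onsets = np.sort(rng.integers(2000, t.size - 1000, 40))              # toy onset samples

    b, a = butter(3, [4 / (fs / 2), 6 / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))   # instantaneous theta phase

    # Evoked amplitude: peak-to-peak of the raw trace 0-200 ms after each onset
    amp = np.array([np.ptp(lfp[i:i + 200]) for i in onsets])
    high = np.abs(phase[onsets]) < np.pi / 2         # assumed "high-excitability" half-cycle
    print("mean amplitude, preferred phase:", amp[high].mean())
    print("mean amplitude, opposite phase: ", amp[~high].mean())
    ```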

  20. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters

    PubMed Central

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different. PMID:29018391

  2. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters.

    PubMed

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different.

  3. Emotional conditioning to masked stimuli and modulation of visuospatial attention.

    PubMed

    Beaver, John D; Mogg, Karin; Bradley, Brendan P

    2005-03-01

    Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise or with an innocuous tone in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during either acquisition or expression. Copyright 2005 APA, all rights reserved.

  4. Effect of a combination of flip and zooming stimuli on the performance of a visual brain-computer interface for spelling.

    PubMed

    Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej

    2018-02-13

    Brain-computer interface (BCI) systems allow their users to communicate with the external world by recognizing intention directly from brain activity, without assistance from the peripheral motor nervous system. The P300 speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character), which is based on apparent motion, suffered less from refractory effects, but it did not significantly improve performance. In addition, a presentation paradigm using a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and yield better BCI performance. To extend this method of stimulus presentation within a BCI and thereby improve performance, we present a new paradigm combining the flip stimulus with a zooming action. This presentation modality allowed BCI users to focus their attention more easily. We investigated whether such a combination could unite the advantages of both types of stimulus presentation and bring a significant improvement in performance over the conventional flip stimulus. The experimental results showed that the proposed paradigm obtained significantly higher classification accuracies and bit rates than the conventional flip paradigm (p<0.01).
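
    For context, BCI bit rates of this kind are commonly computed with the Wolpaw formula; the sketch below uses placeholder numbers, not values from this study.

    ```python
    import math

    def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
        """Information transferred per selection (bits); 0 at or below chance."""
        if accuracy <= 1.0 / n_classes:
            return 0.0
        if accuracy >= 1.0:
            return math.log2(n_classes)
        return (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

    # e.g. a 36-symbol speller at 90% accuracy and 4 selections per minute (assumed)
    bits = wolpaw_bits_per_selection(36, 0.90)
    print(f"{bits:.2f} bits/selection, {bits * 4:.2f} bits/min")
    ```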

  5. Peripheral visual response time and visual display layout

    NASA Technical Reports Server (NTRS)

    Haines, R. F.

    1974-01-01

    Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg head-up tilt, under exposure to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under the different conditions.

  6. Visual grouping under isoluminant condition: impact of mental fatigue

    NASA Astrophysics Data System (ADS)

    Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta

    2016-09-01

    Rather than selecting arbitrary elements, our visual perception favors only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study was to determine the influence of mental fatigue on the visual grouping of specific information, namely the color and configuration of stimuli, in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general state. Objective evidence was obtained in a specially designed visual search task in which achromatic and chromatic isoluminant stimuli were used to avoid the so-called pop-out effect caused by differences in light intensity. In four tasks, each individual was instructed to identify the symbols whose apertures pointed in the same direction. The color component differed between the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when the visual stimuli share the same color and aperture direction, and that reaction times are shortest in the evening. Moreover, the reaction-time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. This effect increases significantly in the presence of mental fatigue, but it does not strongly influence the accuracy of task performance.

  7. Influence of cognitive style and interstimulus interval on the hemispheric processing of tactile stimuli.

    PubMed

    Minagawa, N; Kashu, K

    1989-06-01

    Sixteen adult subjects performed a tactile recognition task. On the basis of our 1984 study, half of the subjects were classified as having a left-hemispheric preference for the processing of visual stimuli and the other half as having a right-hemispheric preference. The present task was conducted according to the S1-S2 matching paradigm. The standard stimulus was a readily recognizable object presented tactually to either the left or the right hand of each subject. The comparison stimulus was an object picture presented visually by slide in a tachistoscope. The interstimulus interval was 0.05 sec or 2.5 sec. Analysis indicated that the left-preference group showed right-hand superiority, whereas the right-preference group showed left-hand superiority. The notion of individual hemisphericity was supported for tactile processing.

  8. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set numbers of products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that consumers' visual search for a specific brand can be facilitated by semantically related stimuli presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short

  9. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    PubMed

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insights into the attentional mechanisms associated with such spatial scaling.

  10. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    PubMed

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays elicited by signals in each modality. The number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly more numerous when the two were combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because even when presented with visual stimuli, lizards responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that produced by the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  11. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  12. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    A visually induced near-infrared spectroscopy (NIRS) response was utilized to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce the subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and each was asked to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of the distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
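
    A minimal sketch of the averaging-based decoding logic described here, under the assumption of a single-channel hemodynamic signal and known per-stimulus onset times; the window length and the peak-based score are illustrative choices, not the authors' method.

    ```python
    import numpy as np

    def gazed_target(nirs, onsets_per_stimulus, fs=10.0, win_s=15.0):
        """nirs: 1-D signal; onsets_per_stimulus: one array of onset samples per stimulus."""
        win = int(win_s * fs)
        scores = []
        for onsets in onsets_per_stimulus:
            epochs = np.stack([nirs[o:o + win] for o in onsets if o + win <= nirs.size])
            avg = epochs.mean(axis=0) - epochs.mean()   # averaged epoch, grand mean removed
            scores.append(float(np.max(avg)))           # peak of the averaged response
        return int(np.argmax(scores)), scores

    # Synthetic usage: 4 stimuli; the gazed one is whichever averaged trace peaks highest.
    rng = np.random.default_rng(1)
    signal = rng.normal(0, 1, 6000)
    onsets = [np.sort(rng.integers(0, 5800, 10)) for _ in range(4)]
    print(gazed_target(signal, onsets))
    ```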

  13. Visual processing in the central bee brain.

    PubMed

    Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila

    2009-08-12

    Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.

  14. Visual Aversive Learning Compromises Sensory Discrimination.

    PubMed

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  15. Altered processing of visual emotional stimuli in posttraumatic stress disorder: an event-related potential study.

    PubMed

    Saar-Ashkenazy, Rotem; Shalev, Hadar; Kanthak, Magdalena K; Guez, Jonathan; Friedman, Alon; Cohen, Jonathan E

    2015-08-30

    Patients with posttraumatic stress disorder (PTSD) display abnormal emotional processing and bias towards emotional content. Most neurophysiological studies in PTSD found higher amplitudes of event-related potentials (ERPs) in response to trauma-related visual content. Here we aimed to characterize brain electrical activity in PTSD subjects in response to non-trauma-related emotion-laden pictures (positive, neutral and negative). A combined behavioral-ERP study was conducted in 14 severe PTSD patients and 14 controls. Response time in PTSD patients was slower compared with that in controls, irrespective of emotional valence. In both PTSD and controls, response time to negative pictures was slower compared with that to neutral or positive pictures. Upon ranking, both control and PTSD subjects similarly discriminated between pictures with different emotional valences. ERP analysis revealed three distinctive components (at ~300, ~600 and ~1000 ms post-stimulus onset) for emotional valence in control subjects. In contrast, PTSD patients displayed a similar brain response across all emotional categories, resembling the response of controls to negative stimuli. We interpret these findings as a brain-circuit response tendency towards negative overgeneralization in PTSD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  17. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    PubMed Central

    Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.

    2014-01-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652
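
    A minimal sketch of a baseline-normalized time-frequency decomposition of the kind described here, using complex Morlet wavelets to estimate gamma-band power; the toy signal, frequency grid, and baseline window are assumptions, not the study's MEG pipeline.

    ```python
    import numpy as np

    def morlet_power(x, fs, freqs, n_cycles=7):
        """Power (freqs x time) from convolution with complex Morlet wavelets."""
        power = np.empty((len(freqs), x.size))
        for i, f in enumerate(freqs):
            sigma_t = n_cycles / (2 * np.pi * f)
            t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
            wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # energy-normalize
            power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
        return power

    fs = 600.0
    t = np.arange(-0.5, 1.0, 1 / fs)                       # 500 ms baseline, 1 s post-stimulus
    x = np.random.randn(t.size) + (t > 0) * np.sin(2 * np.pi * 40 * t)  # toy 40 Hz burst
    power = morlet_power(x, fs, freqs=np.arange(30, 80, 2))             # gamma band
    baseline = power[:, t < 0].mean(axis=1, keepdims=True)
    change_db = 10 * np.log10(power / baseline)            # change from baseline, in dB
    print(change_db.shape)
    ```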

  18. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  19. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  20. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
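
    The two measures named here can be approximated as follows: a within-class reproducibility index taken as the mean pairwise correlation of voxel patterns, and a between-class decoding accuracy from a leave-one-out correlation-based classifier. The exact definitions in the paper may differ, and the data below are synthetic.

    ```python
    import numpy as np

    def reproducibility_index(patterns):
        """patterns: trials x voxels; mean pairwise Pearson correlation across trials."""
        r = np.corrcoef(patterns)
        upper = np.triu_indices_from(r, k=1)
        return r[upper].mean()

    def loo_decoding_accuracy(class_a, class_b):
        """Leave-one-out nearest-centroid (correlation) decoding of two classes."""
        data = [(x, 0) for x in class_a] + [(x, 1) for x in class_b]
        correct = 0
        for i, (x, label) in enumerate(data):
            rest = [d for j, d in enumerate(data) if j != i]
            centroids = [np.mean([p for p, l in rest if l == c], axis=0) for c in (0, 1)]
            pred = int(np.corrcoef(x, centroids[1])[0, 1] > np.corrcoef(x, centroids[0])[0, 1])
            correct += pred == label
        return correct / len(data)

    rng = np.random.default_rng(2)
    old = rng.normal(0, 1, (20, 200)) + rng.normal(0, 2, 200)    # synthetic "old people" patterns
    young = rng.normal(0, 1, (20, 200)) + rng.normal(0, 2, 200)  # synthetic "young people" patterns
    print(reproducibility_index(old), loo_decoding_accuracy(old, young))
    ```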

  1. Reproducibility and Discriminability of Brain Patterns of Semantic Categories Enhanced by Congruent Audiovisual Stimuli

    PubMed Central

    Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: “old people” and “young people.” These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration. PMID:21750692

  2. Action video game players' visual search advantage extends to biologically relevant stimuli.

    PubMed

    Chisholm, Joseph D; Kingstone, Alan

    2015-07-01

    Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli, one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provide evidence that these benefits can generalize to more complex and biologically relevant stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Exposure to subliminal arousing stimuli induces robust activation in the amygdala, hippocampus, anterior cingulate, insular cortex and primary visual cortex: a systematic meta-analysis of fMRI studies.

    PubMed

    Brooks, S J; Savov, V; Allzén, E; Benedict, C; Fredriksson, R; Schiöth, H B

    2012-02-01

    Functional magnetic resonance imaging (fMRI) demonstrates that the subliminal presentation of arousing stimuli can activate subcortical brain regions independently of consciousness-generating top-down cortical modulation loops. Delineating these processes may elucidate mechanisms of arousal, aberrations in which may underlie some psychiatric conditions. Here we are the first to review and discuss four Activation Likelihood Estimation (ALE) meta-analyses of fMRI studies using subliminal paradigms. We find that a maximum of 9 out of 12 studies using subliminal presentation of faces contribute to activation of the amygdala, and a significantly high number of studies also report activation in the bilateral anterior cingulate, bilateral insular cortex, hippocampus and primary visual cortex. Subliminal faces are the strongest modality, whereas lexical stimuli are the weakest. Meta-analyses excluding studies that used Regions of Interest (ROI) revealed no biasing effect. Core neuronal arousal in the brain, which may at first be independent of conscious processing, potentially involves a network incorporating primary visual, somatosensory, implicit memory and conflict-monitoring regions. These data could provide candidate brain regions for the study of psychiatric disorders associated with aberrant automatic emotional processing. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Magnetic stimulation of visual cortex impairs perceptual learning.

    PubMed

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in frontoparietal cortical regions and in visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, frontoparietal regions (i.e., the posterior intraparietal sulcus, pIPS) became less activated for trained than for untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we applied transcranial magnetic stimulation over the visual occipital and parietal regions previously shown to be modulated, in order to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli compared with the pIPS and sham control conditions. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Agnosia for Mirror Stimuli: A New Case Report with a Small Parietal Lesion

    PubMed Central

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Lebas, Axel; Gerardin, Emmanuel; Hannequin, Didier

    2014-01-01

    Only seven cases of agnosia for mirror stimuli have been reported, always with an extensive lesion. We report a new case of agnosia for mirror stimuli due to a circumscribed lesion. An extensive battery of neuropsychological tests and a new experimental procedure assessing visual object mirror and orientation discrimination were administered 10 days after the onset of clinical symptoms and again 5 years later. Our patient's performance was compared with that of four healthy control subjects matched for age. This procedure revealed an agnosia for mirror stimuli. Brain imaging showed a small right occipitoparietal hematoma encompassing the extrastriate cortex adjoining the inferior parietal lobe. This new case suggests that: (i) agnosia for mirror stimuli can persist for 5 years after onset and (ii) the posterior part of the right intraparietal sulcus could be critical in the cognitive process of mirror stimulus discrimination. PMID:25037846

  6. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  7. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    PubMed

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  8. Induction of Social Behavior in Zebrafish: Live Versus Computer Animated Fish as Stimuli

    PubMed Central

    Qin, Meiying; Wong, Albert; Seguin, Diane

    2014-01-01

    The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents, zebrafish are diurnal; thus, visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues, and the question of whether computer-animated images and live stimulus fish have differential effects, have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, and computer-animated images of conspecifics presented by two software applications, the previously employed General Fish Animator and a new application, Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling), manifesting as a reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish or 3D images is not required, and that 2D computer-animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish. PMID:24575942

  9. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 speller, is one of the most commonly used paradigms in the field of brain-computer interfaces (BCI). Several alterations to the visual stimulus presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, regard for the useful information about the spatial location of target symbols contained in the responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with the standard classification (target versus non-target) significantly improves the efficiency of the classical ERP-based speller. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and classification method were evaluated on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. Including the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm: a gain of 9.6% in mean single-trial classification and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli contain discriminable features that provide additional information about the spatial location of intended symbols. This work motivates the search for information in responses to peripheral stimulation to improve the performance of emerging visual ERP-based spellers.
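
    One plausible way to fold the adjacent-stimulus classifiers into the symbol decision is sketched below; the neighbour conventions, the 6 x 6 matrix size, and the simple additive rule are assumptions for illustration, not the authors' exact method.

    ```python
    import numpy as np

    def combine_scores(std_row, std_col, upper, lower, left, right, n=6):
        """Each argument: length-n array of per-flash classifier scores (rows or columns)."""
        total = np.zeros((n, n))
        for r in range(n):
            for c in range(n):
                total[r, c] = std_row[r] + std_col[c]       # standard target classifier
                if r > 0:
                    total[r, c] += upper[r - 1]             # flash of the row above the candidate
                if r < n - 1:
                    total[r, c] += lower[r + 1]             # flash of the row below the candidate
                if c > 0:
                    total[r, c] += left[c - 1]              # flash of the column to its left
                if c < n - 1:
                    total[r, c] += right[c + 1]             # flash of the column to its right
        return np.unravel_index(np.argmax(total), total.shape)  # most likely (row, col)

    rng = np.random.default_rng(3)
    scores = [rng.normal(size=6) for _ in range(6)]
    print(combine_scores(*scores))
    ```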

  10. Stress improves selective attention towards emotionally neutral left ear stimuli.

    PubMed

    Hoskin, Robert; Hunter, M D; Woodruff, P W R

    2014-09-01

    Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Effects of hemisphere speech dominance and seizure focus on patterns of behavioral response errors for three types of stimuli.

    PubMed

    Rausch, R; MacDonald, K

    1997-03-01

    We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.

  12. Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.

    PubMed

    Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J

    2017-01-01

    We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.

  13. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    PubMed

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Psychophysiological responses to drug-associated stimuli in chronic heavy cannabis use.

    PubMed

    Wölfling, Klaus; Flor, Herta; Grüsser, Sabine M

    2008-02-01

    Due to learning processes, originally neutral stimuli become drug-associated and can activate an implicit drug memory, which leads to a conditioned arousing 'drug-seeking' state. This condition is accompanied by specific psychophysiological responses. The goal of the present study was the analysis of changes in cortical and peripheral reactivity to cannabis- as well as alcohol-associated pictures compared with emotionally significant drug-unrelated and neutral pictures in long-term heavy cannabis users. Participants were 15 chronic heavy cannabis users and 15 healthy controls. Verbal reports as well as event-related potentials of the electroencephalogram and skin conductance responses were assessed in a cue-reactivity paradigm to determine the psychophysiological effects caused by drug-related visual stimulus material. The evaluation of self-reported craving and emotional processing showed that cannabis stimuli were perceived as more arousing and pleasant and elicited significantly more cannabis craving in cannabis users than in healthy controls. Cannabis users also demonstrated higher cannabis stimulus-induced arousal, as indicated by significantly increased skin conductance and a larger late positivity of the visual event-related brain potential. These findings support the assumption that drug-associated stimuli acquire increased incentive salience in the addiction history and induce conditioned physiological patterns, which lead to craving and potentially to drug intake. The potency of visual drug-associated cues to capture attention and to activate drug-specific memory traces and accompanying physiological symptoms embedded in a cycle of abstinence and relapse--even in a 'so-called' soft drug--was assessed for the first time.

  15. Parallel Group and Sunspot Counts from SDO/HMI and AAVSO Visual Observers (Abstract)

    NASA Astrophysics Data System (ADS)

    Howe, R.; Alvestad, J.

    2015-06-01

    (Abstract only) Creating group and sunspot counts from the SDO/HMI detector on the Solar Dynamics Observatory (SDO) satellite requires software that calculates sunspots from a “white light” intensity-gram (CCD image) and group counts from a filtered CCD magneto-gram. Images from the satellite are available at http://jsoc.stanford.edu/data/hmi/images/latest/. Together, these two sets of images can be used to estimate the Wolf number as W = (10g + s), which is used to calculate the American Relative index. AAVSO now has approximately two years of group and sunspot counts in the SunEntry database under SDOH observer Jan Alvestad. It is important that we compare these satellite CCD image data with our visual observers' daily submissions to determine whether the SDO/HMI data should be included in calculating the American Relative index. These satellite data are continuous observations with excellent seeing, in contrast with “snapshot” earth-based observations made under mixed seeing. The SDO/HMI group and sunspot counts could be considered unbiased, except that they show a non-normal statistical distribution when compared with the overall visual observations, which follow a Poisson distribution. One challenge that should be addressed by AAVSO in using these SDO/HMI data is the splitting of groups and the derivation of group properties from the magneto-grams. The filtered CCD detector that creates the magneto-grams is not something our visual observers can relate to, unless they were to take CCD images in H-alpha and/or the calcium spectrum line. So, questions remain as to how these satellite CCD image counts can be integrated into the overall American Relative index.
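
    A minimal Python sketch of the Wolf-number calculation referenced above (the 10g + s weighting is from the abstract; the observer scaling factor k, function name, and example counts are illustrative assumptions, not AAVSO's actual pipeline):

        def wolf_number(groups, spots, k=1.0):
            """Wolf number W = k * (10 * g + s) for one observation."""
            return k * (10 * groups + spots)

        # Illustrative daily submissions: (group count, sunspot count) per observer.
        observations = [(4, 23), (5, 27), (4, 21)]

        daily_w = [wolf_number(g, s) for g, s in observations]
        average_w = sum(daily_w) / len(daily_w)
        print(f"Individual Wolf numbers: {daily_w}, daily average: {average_w:.1f}")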

  16. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume; Burke, Darren

    2017-02-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
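
    The random-dot kinematograms described above combine a coherence level (the fraction of dots moving in the signal direction) with a speed and direction. A minimal sketch of one frame update for such a stimulus, under the common "white-noise" coherence scheme in which each dot is independently reassigned as signal or noise on every frame (parameters and display size are illustrative, not the authors' values):

        import random, math

        def update_dots(dots, speed, direction_deg, coherence, width=800, height=600):
            """Move each dot one frame: coherent dots follow the signal direction,
            the rest move in random directions; dots wrap at the display edges."""
            new_dots = []
            for (x, y) in dots:
                if random.random() < coherence:
                    angle = math.radians(direction_deg)      # signal dot
                else:
                    angle = random.uniform(0, 2 * math.pi)   # noise dot
                x = (x + speed * math.cos(angle)) % width
                y = (y + speed * math.sin(angle)) % height
                new_dots.append((x, y))
            return new_dots

        dots = [(random.uniform(0, 800), random.uniform(0, 600)) for _ in range(200)]
        dots = update_dots(dots, speed=8.0, direction_deg=0.0, coherence=0.5)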

  17. A working memory bias for alcohol-related stimuli depends on drinking score.

    PubMed

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  18. Causal reports: Context-dependent contributions of intuitive physics and visual impressions of launching.

    PubMed

    Vicovaro, Michele

    2018-05-01

    Everyday causal reports appear to be based on a blend of perceptual and cognitive processes. Causality can sometimes be perceived automatically through low-level visual processing of stimuli, but it can also be inferred on the basis of an intuitive understanding of the physical mechanism that underlies an observable event. We investigated how visual impressions of launching and the intuitive physics of collisions contribute to the formation of explicit causal responses. In Experiment 1, participants observed collisions between realistic objects differing in apparent material and hence implied mass, whereas in Experiment 2, participants observed collisions between abstract, non-material objects. The results of Experiment 1 showed that ratings of causality were mainly driven by the intuitive physics of collisions, whereas the results of Experiment 2 provide some support to the hypothesis that ratings of causality were mainly driven by visual impressions of launching. These results suggest that stimulus factors and experimental design factors - such as the realism of the stimuli and the variation in the implied mass of the colliding objects - may determine the relative contributions of perceptual and post-perceptual cognitive processes to explicit causal responses. A revised version of the impetus transmission heuristic provides a satisfactory explanation for these results, whereas the hypothesis that causal responses and intuitive physics are based on the internalization of physical laws does not. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study

    PubMed Central

    Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.

    2012-01-01

    Background Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014

  20. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    PubMed Central

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  1. Heightened eating drive and visual food stimuli attenuate central nociceptive processing.

    PubMed

    Wright, Hazel; Li, Xiaoyun; Fallon, Nicholas B; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A; Halford, Jason C G; Stancak, Andrej

    2015-03-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation underlying the hunger-pain competition were explored with 128-channel EEG recordings and source dipole analysis of laser-evoked potentials (LEPs). We found that initial pain ratings were temporarily reduced when participants were hungry compared with fed. Source activity in parahippocampal gyrus was weaker when participants were hungry, and activations of operculo-insular cortex, anterior cingulate cortex, parahippocampal gyrus, and cerebellum were smaller in the context of appetitive food photographs than in that of inedible object photographs. Cortical processing of noxious stimuli in pain-related brain structures is reduced and pain temporarily attenuated when people are hungry or passively viewing food photographs, suggesting a possible interaction between the opposing motivational forces of the eating drive and pain. Copyright © 2015 the American Physiological Society.

  2. Heightened eating drive and visual food stimuli attenuate central nociceptive processing

    PubMed Central

    Li, Xiaoyun; Fallon, Nicholas B.; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A.; Halford, Jason C. G.; Stancak, Andrej

    2014-01-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation underlying the hunger-pain competition were explored with 128-channel EEG recordings and source dipole analysis of laser-evoked potentials (LEPs). We found that initial pain ratings were temporarily reduced when participants were hungry compared with fed. Source activity in parahippocampal gyrus was weaker when participants were hungry, and activations of operculo-insular cortex, anterior cingulate cortex, parahippocampal gyrus, and cerebellum were smaller in the context of appetitive food photographs than in that of inedible object photographs. Cortical processing of noxious stimuli in pain-related brain structures is reduced and pain temporarily attenuated when people are hungry or passively viewing food photographs, suggesting a possible interaction between the opposing motivational forces of the eating drive and pain. PMID:25475348

  3. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  4. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    PubMed

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not

  5. Agnosia for mirror stimuli: a new case report with a small parietal lesion.

    PubMed

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Lebas, Axel; Gerardin, Emmanuel; Hannequin, Didier

    2014-11-01

    Only seven cases of agnosia for mirror stimuli have been reported, always with an extensive lesion. We report a new case of an agnosia for mirror stimuli due to a circumscribed lesion. An extensive battery of neuropsychological tests and a new experimental procedure to assess visual object mirror and orientation discrimination were administered 10 days after the onset of clinical symptoms, and 5 years later. The performances of our patient were compared with those of four healthy control subjects matched for age. This test revealed an agnosia for mirror stimuli. Brain imaging showed a small right occipitoparietal hematoma, encompassing the extrastriate cortex adjoining the inferior parietal lobe. This new case suggests that: (i) agnosia for mirror stimuli can persist for 5 years after onset and (ii) the posterior part of the right intraparietal sulcus could be critical in the cognitive process of mirror stimuli discrimination. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.

    PubMed

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

    In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock." We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.

  7. Rapid innate defensive responses of mice to looming visual stimuli.

    PubMed

    Yilmaz, Melis; Meister, Markus

    2013-10-21

    Much of brain science is concerned with understanding the neural circuits that underlie specific behaviors. While the mouse has become a favorite experimental subject, the behaviors of this species are still poorly explored. For example, the mouse retina, like that of other mammals, contains ∼20 different circuits that compute distinct features of the visual scene [1, 2]. By comparison, only a handful of innate visual behaviors are known in this species--the pupil reflex [3], phototaxis [4], the optomotor response [5], and the cliff response [6]--two of which are simple reflexes that require little visual processing. We explored the behavior of mice under a visual display that simulates an approaching object, which causes defensive reactions in some other species [7, 8]. We show that mice respond to this stimulus either by initiating escape within a second or by freezing for an extended period. The probability of these defensive behaviors is strongly dependent on the parameters of the visual stimulus. Directed experiments identify candidate retinal circuits underlying the behavior and lead the way into detailed study of these neural pathways. This response is a new addition to the repertoire of innate defensive behaviors in the mouse that allows the detection and avoidance of aerial predators. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    PubMed

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). The aim of this study was to investigate the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.01). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.001). Additionally, audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.

  9. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease

    PubMed Central

    Yang, Weiping; Ren, Yanling; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). The aim of this study was to investigate the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.01). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.001). Additionally, audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD. PMID:29850014
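
    The race model analysis referred to in the two records above typically denotes Miller's race model inequality, which tests whether redundant (audiovisual) response times are faster than can be explained by two independent unimodal channels racing each other. A minimal sketch of that test (the reaction times and time grid are invented for illustration; this is not the authors' implementation):

        def ecdf(rts, t):
            """Empirical cumulative probability P(RT <= t)."""
            return sum(rt <= t for rt in rts) / len(rts)

        def race_model_violation(rt_audio, rt_visual, rt_av, times):
            """Return time points where P(AV <= t) exceeds P(A <= t) + P(V <= t),
            i.e., where the race model inequality is violated (evidence of integration)."""
            return [t for t in times
                    if ecdf(rt_av, t) > min(1.0, ecdf(rt_audio, t) + ecdf(rt_visual, t))]

        # Illustrative reaction times (ms)
        rt_a = [320, 340, 355, 370, 390]
        rt_v = [330, 350, 360, 380, 400]
        rt_av = [280, 300, 315, 330, 360]
        print(race_model_violation(rt_a, rt_v, rt_av, times=range(250, 450, 10)))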

  10. [Unconscious Acoustical Stimuli Effects on Event-related Potentials in Humans].

    PubMed

    Kopeikina, E A; Choroshich, V V; Aleksandrov, A Y; Ivanova, V Y

    2015-01-01

    Unconscious perception substantially affects human behavior. Most results in this area have been obtained in experiments with visual stimuli; however, acoustical stimuli play no less important a role in behavior. The aim of this paper is an electroencephalographic investigation of the effects of unconscious acoustical stimulation on the electrophysiological activity of the brain. For this purpose, event-related potentials were acquired under an unconscious stimulus priming paradigm. One-syllable, three-letter Russian words and pseudo-words with a single-letter substitution were used as primes and targets. We found that repetition and alternative priming similarly affect the event-related potential component with a latency of 200 ms after target presentation in frontal, parietal and temporal areas. Under alternative priming, the direction of the amplitude modification near 400 ms differed for word and pseudo-word targets: alternative priming reliably increased the ERP amplitude around 400 ms for pseudo-word targets and decreased it for word targets. Taking into account that all participants were unable to distinguish the prime stimuli, we can assume that the event-related potential changes were evoked by unconscious perception of the acoustical stimuli. The ERP amplitude dynamics revealed in the current investigation demonstrate that subliminal acoustical stimuli can modulate the electrical activity evoked by verbal acoustical stimulation.

  11. Hemispheric specialization for global and local processing: A direct comparison of linguistic and non-linguistic stimuli.

    PubMed

    Brederoo, Sanne G; Nieuwenstein, Mark R; Lorist, Monicque M; Cornelissen, Frans W

    2017-12-01

    It is often assumed that the human brain processes the global and local properties of visual stimuli in a lateralized fashion, with a left hemisphere (LH) specialization for local detail, and a right hemisphere (RH) specialization for global form. However, the evidence for such global-local lateralization stems predominantly from studies using linguistic stimuli, the processing of which has been shown to be LH lateralized in itself. In addition, some studies have reported a reversal of global-local lateralization when using non-linguistic stimuli. Accordingly, it remains unclear whether global-local lateralization may in fact be stimulus-specific. To address this issue, we asked participants to respond to linguistic and non-linguistic stimuli that were presented in the right and left visual fields, allowing for first access by the LH and RH, respectively. The results showed global-RH and local-LH advantages for both stimulus types, but the global lateralization effect was larger for linguistic stimuli. Furthermore, this pattern of results was found to be robust, as it was observed regardless of two other task manipulations. We conclude that the instantiation and direction of global and local lateralization is not stimulus-specific. However, the magnitude of global-, but not local-, lateralization is dependent on stimulus type. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Dynamics of normalization underlying masking in human visual cortex.

    PubMed

    Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M

    2012-02-22

    Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
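
    As a rough illustration of the divisive gain control (normalization) framework fitted above, the steady-state response to a test stimulus in the presence of a mask can be written as the test drive divided by a gain pool that contains both stimuli, so that the mask suppresses the test response and the output depends on the contrast ratio. The sketch below uses a generic normalization equation with made-up parameter values; it is not the authors' fitted model and omits its ~30 ms temporal dynamics:

        def normalized_response(c_test, c_mask, n=2.0, sigma=0.1, r_max=1.0):
            """Generic divisive normalization: the mask contributes to the gain pool
            (denominator) and thereby suppresses the response to the test stimulus."""
            return r_max * c_test**n / (sigma**n + c_test**n + c_mask**n)

        for c_mask in (0.0, 0.1, 0.3):
            print(c_mask, round(normalized_response(c_test=0.3, c_mask=c_mask), 3))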

  13. Postural time-to-contact as a precursor of visually induced motion sickness.

    PubMed

    Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A

    2018-06-01

    The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
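
    Postural time-to-contact, as used above, estimates how long the center of pressure would take to reach the boundary of the base of support if it continued on its current trajectory. A heavily simplified, velocity-only sketch of that idea in one dimension (the measure is usually computed in two dimensions and may also incorporate acceleration; the values here are illustrative):

        def time_to_contact_1d(position, velocity, boundary):
            """Velocity-only estimate: time for the center of pressure to reach the
            stability boundary at its current velocity (inf if moving away from it)."""
            distance = boundary - position
            if velocity == 0 or (distance > 0) != (velocity > 0):
                return float("inf")
            return distance / velocity

        # Center of pressure 2 cm from the boundary, drifting toward it at 5 cm/s.
        print(time_to_contact_1d(position=0.0, velocity=5.0, boundary=2.0))  # 0.4 s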

  14. Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli

    PubMed Central

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947

  15. Fusion Prevents the Redundant Signals Effect: Evidence from Stereoscopically Presented Stimuli

    ERIC Educational Resources Information Center

    Schroter, Hannes; Fiedler, Anja; Miller, Jeff; Ulrich, Rolf

    2011-01-01

    In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas…

  16. Threat as a feature in visual semantic object memory.

    PubMed

    Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John

    2013-08-01

    Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region elicited greater signal changes for threatening items compared to nonthreatening items from both the naturally occurring and man-made stimulus supraordinate categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of the efficient or rapid visual recognition of groups of items that confer an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.

  17. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
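
    The multivariate decoding approach described above amounts to training a classifier on delay-period voxel patterns and assessing it with cross-validation; above-chance accuracy indicates content-specific memory signals in that region. A minimal, generic sketch using scikit-learn (the data are random placeholders and the classifier choice is an assumption; the authors' actual pipeline may differ):

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        # Placeholder data: 60 delay-period trials x 200 voxels, 4 memorized stimuli.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 200))          # voxel patterns (one row per trial)
        y = np.repeat(np.arange(4), 15)             # memorized stimulus label per trial

        # 5-fold cross-validation of a linear classifier; accuracy above 0.25 (chance
        # for 4 classes) would indicate content-specific delay-period activity.
        scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
        print("Mean decoding accuracy:", scores.mean())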

  18. Automatic attention to emotional stimuli: neural correlates.

    PubMed

    Carretié, Luis; Hinojosa, José A; Martín-Loeches, Manuel; Mercado, Francisco; Tapia, Manuel

    2004-08-01

    We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array, and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures, and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC and not the VAC is responsible for experimental effects described above. Intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the stimulus emotional content and on its associated biological importance. Copyright 2004 Wiley-Liss, Inc.

  19. Neural circuits underlying visually evoked escapes in larval zebrafish

    PubMed Central

    Dunn, Timothy W.; Gebhardt, Christoph; Naumann, Eva A.; Riegler, Clemens; Ahrens, Misha B.; Engert, Florian; Del Bene, Filippo

    2015-01-01

    SUMMARY Escape behaviors deliver organisms away from imminent catastrophe. Here, we characterize behavioral responses of freely swimming larval zebrafish to looming visual stimuli simulating predators. We report that the visual system alone can recruit lateralized, rapid escape motor programs, similar to those elicited by mechanosensory modalities. Two-photon calcium imaging of retino-recipient midbrain regions isolated the optic tectum as an important center processing looming stimuli, with ensemble activity encoding the critical image size determining escape latency. Furthermore, we describe activity in retinal ganglion cell terminals and superficial inhibitory interneurons in the tectum during looming and propose a model for how temporal dynamics in tectal periventricular neurons might arise from computations between these two fundamental constituents. Finally, laser ablations of hindbrain circuitry confirmed that visual and mechanosensory modalities share the same premotor output network. Together, we establish a circuit for the processing of aversive stimuli in the context of an innate visual behavior. PMID:26804997

  20. Revealing hidden covariation detection: evidence for implicit abstraction at study.

    PubMed

    Rossnagel, C S

    2001-09-01

    Four experiments in the brain scans paradigm (P. Lewicki, T. Hill, & I. Sasaki, 1989) investigated hidden covariation detection (HCD). In Experiment 1 HCD was found in an implicit- but not in an explicit-instruction group. In Experiment 2 HCD was impaired by nonholistic perception of stimuli but not by divided attention. In Experiment 3 HCD was eliminated by interspersing stimuli that deviated from the critical covariation. In Experiment 4 a transfer procedure was used. HCD was found with dissimilar test stimuli that preserved the covariation but was almost eliminated with similar stimuli that were neutral as to the covariation. Awareness was assessed both by objective and subjective tests in all experiments. Results suggest that HCD is an effect of implicit rule abstraction and that similarity processing plays only a minor role. HCD might be suppressed by intentional search strategies that induce inappropriate aggregation of stimulus information.

  1. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimate. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In
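
    The abstract above states that synapses were trained via Hebbian potentiation and a decay term. A minimal sketch of one such update rule (the learning rate, decay constant, and activity values are illustrative assumptions, not the published model's parameters):

        def hebbian_update(w, pre, post, eta=0.01, decay=0.001):
            """One Hebbian step: potentiate in proportion to pre- and post-synaptic
            activity, with a passive decay term that keeps the weight bounded."""
            return w + eta * pre * post - decay * w

        w = 0.2
        for _ in range(100):
            w = hebbian_update(w, pre=0.8, post=0.6)   # repeated co-activation
        print(round(w, 3))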

  2. Specific excitatory connectivity for feature integration in mouse primary visual cortex

    PubMed Central

    Molina-Luna, Patricia; Roth, Morgane M.

    2017-01-01

    Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and

  3. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  4. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    NASA Astrophysics Data System (ADS)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point-process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
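
    In the linear-nonlinear Poisson framework referred to above, the stimulus is passed through a linear filter, the filtered drive is mapped through a static nonlinearity to an instantaneous turn rate, and turn events are drawn from a Poisson process at that rate. A minimal sketch of one simulation step (the kernel, exponential nonlinearity, and time step are illustrative, not the study's fitted values):

        import math, random

        def turn_rate(stimulus_history, kernel, baseline_rate=0.1):
            """Linear filter followed by an exponential nonlinearity -> turn rate (1/s)."""
            drive = sum(k * s for k, s in zip(kernel, stimulus_history))
            return baseline_rate * math.exp(drive)

        def turned_this_step(rate, dt=0.05):
            """Poisson event in a small time bin of width dt seconds."""
            return random.random() < rate * dt

        kernel = [0.5, 0.3, 0.1]               # most recent sample weighted most heavily
        recent_stimulus = [1.2, 0.4, -0.3]     # e.g., recent changes in light intensity
        print(turned_this_step(turn_rate(recent_stimulus, kernel)))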

  5. Perceptual Load Alters Visual Excitability

    ERIC Educational Resources Information Center

    Carmel, David; Thorne, Jeremy D.; Rees, Geraint; Lavie, Nilli

    2011-01-01

    Increasing perceptual load reduces the processing of visual stimuli outside the focus of attention, but the mechanism underlying these effects remains unclear. Here we tested an account attributing the effects of perceptual load to modulations of visual cortex excitability. In contrast to stimulus competition accounts, which propose that load…

  6. Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance

    PubMed Central

    Veniero, Domenica

    2017-01-01

    Abstract Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
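
    The distinction drawn above between detection criterion and visual sensitivity is the standard signal detection theory one: sensitivity (d') reflects how well signal and noise are discriminated, while the criterion (c) reflects the overall tendency to respond "target present". A quick sketch of both quantities computed from hit and false-alarm rates (the rates below are invented for illustration):

        from statistics import NormalDist

        z = NormalDist().inv_cdf  # inverse of the standard normal CDF

        def sdt_indices(hit_rate, false_alarm_rate):
            """Return (d', criterion c) from hit and false-alarm rates."""
            d_prime = z(hit_rate) - z(false_alarm_rate)
            criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
            return d_prime, criterion

        # A criterion shift (e.g., more "present" responses in states of high excitability)
        # raises both hit and false-alarm rates while leaving d' roughly unchanged.
        print(sdt_indices(0.69, 0.31))   # roughly d' = 1.0, c = 0.0
        print(sdt_indices(0.84, 0.50))   # similar d', more liberal criterion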

  7. Inhibition of Return in the Visual Field

    PubMed Central

    Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans

    2013-01-01

    Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field. PMID:23820946

  8. The “Visual Shock” of Francis Bacon: an essay in neuroesthetics

    PubMed Central

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

    In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812

  9. Look, Snap, See: Visual Literacy through the Camera.

    ERIC Educational Resources Information Center

    Spoerner, Thomas M.

    1981-01-01

    Activities involving photographs stimulate visual perceptual awareness. Children understand visual stimuli before having verbal capacity to deal with the world. Vision becomes the primary means for learning, understanding, and adjusting to the environment. Photography can provide an effective avenue to visual literacy. (Author)

  10. Audio-Visual Speech Perception Is Special

    ERIC Educational Resources Information Center

    Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.

    2005-01-01

    In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…

  11. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.

  12. Psychophysical and neuroimaging responses to moving stimuli in a patient with the Riddoch phenomenon due to bilateral visual cortex lesions.

    PubMed

    Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C

    2018-05-09

    Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveal that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  14. Visual-Spatial Orienting in Autism.

    ERIC Educational Resources Information Center

    Wainwright, J. Ann; Bryson, Susan E.

    1996-01-01

    Visual-spatial orienting in 10 high-functioning adults with autism was examined. Compared to controls, subjects responded faster to central than to lateral stimuli, and showed a left visual field advantage for stimulus detection only when laterally presented. Abnormalities in attention shifting and coordination of attentional and motor systems are…

  15. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex

    PubMed Central

    Singer, Wolf; Maass, Wolfgang

    2009-01-01

    It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
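
    The decoding approach described in this record can be illustrated with a minimal sketch, assuming synthetic spike-count data rather than the actual recordings: a linear classifier is compared against a nonlinear (RBF-kernel) support vector machine on the task of reading out stimulus identity from ensemble activity.

    # Minimal sketch of linear vs. nonlinear decoding of stimulus identity from
    # population spike counts (synthetic data; not the authors' actual pipeline).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_neurons, trials_per_stim, n_stimuli = 100, 60, 3

    # Each stimulus evokes a different mean firing-rate pattern across the ensemble.
    tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_stimuli, n_neurons))
    X = np.vstack([rng.poisson(tuning[s], size=(trials_per_stim, n_neurons))
                   for s in range(n_stimuli)])
    y = np.repeat(np.arange(n_stimuli), trials_per_stim)

    # If most of the extractable information is linearly decodable, the two
    # classifiers should reach similar cross-validated accuracy.
    for name, clf in [("linear SVM", SVC(kernel="linear", C=1.0)),
                      ("RBF-kernel SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.2f} decoding accuracy")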

  16. Environmental Interactions and Epistasis Are Revealed in the Proteomic Responses to Complex Stimuli

    PubMed Central

    Samir, Parimal; Rahul; Slaughter, James C.; Link, Andrew J.

    2015-01-01

    Ultimately, the genotype of a cell and its interaction with the environment determine the cell’s biochemical state. While the cell’s response to a single stimulus has been studied extensively, a conceptual framework to model the effect of multiple environmental stimuli applied concurrently is not as well developed. In this study, we developed the concepts of environmental interactions and epistasis to explain the responses of the S. cerevisiae proteome to simultaneous environmental stimuli. We hypothesize that, as an abstraction, environmental stimuli can be treated as analogous to genetic elements. This would allow modeling of the effects of multiple stimuli using the concepts and tools developed for studying gene interactions. Mirroring gene interactions, our results show that environmental interactions play a critical role in determining the state of the proteome. We show that individual and complex environmental stimuli behave similarly to genetic elements in regulating the cellular responses to stimuli, including the phenomena of dominance and suppression. Interestingly, we observed that the effect of a stimulus on a protein is dominant over other stimuli if the response to the stimulus involves the protein. Using publicly available transcriptomic data, we find that environmental interactions and epistasis regulate transcriptomic responses as well. PMID:26247773

  17. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval

    ERIC Educational Resources Information Center

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-01-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of…

  18. [Recognition of visual objects under forward masking. Effects of categorical similarity of test and masking stimuli].

    PubMed

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images--animals and nonliving objects--under forward masking. The new data revealed that masking effects depended on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response times were slowest, with high dispersion of response times, when the target and masking stimuli belonged to the same category. These effects were clearer in the animal-recognition task than in the recognition of nonliving objects. We suppose that the effects reflect interference between cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  19. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm including rare trials that included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
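
    The adaptation-versus-habituation logic in this record can be illustrated with a toy rate model. This is not the authors' Bayesian model; the gain dynamics and parameter values below are assumptions chosen only to show why an adaptation account predicts response recovery for a brighter oddball but not for a dimmer one.

    # Toy divisive-adaptation model: a gain term shrinks with repeated
    # stimulation in a luminance-dependent way, so the transient response
    # recovers for a brighter oddball but not for a dimmer one.
    import numpy as np

    def responses(luminances, tau=3.0):
        """Predicted transient response to each stimulus in a repeated sequence."""
        decay = np.exp(-1.0 / tau)
        gain, out = 1.0, []
        for lum in luminances:
            out.append(gain * lum)                       # response scales with current gain
            gain = gain * decay + (1 - decay) / (1 + lum)  # gain adapts toward a luminance-dependent floor
        return np.array(out)

    standard = [1.0] * 6
    print(np.round(responses(standard + [2.0]), 2))  # brighter oddball: response recovers
    print(np.round(responses(standard + [0.5]), 2))  # dimmer oddball: no recovery under pure adaptation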

  20. Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli.

    PubMed

    Minakata, Katsumi; Gondan, Matthias

    2018-05-01

    When participants respond to stimuli of two sources, response times (RTs) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. The pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
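
    The contrast between race and coactivation accounts in this record is usually evaluated with the race-model (Miller) inequality. A minimal sketch with synthetic reaction times follows; the numbers and variable names are illustrative, not the authors' data.

    # Race-model inequality: under any race model, P(RT_AV <= t) should not
    # exceed P(RT_A <= t) + P(RT_V <= t); violations are taken as evidence
    # for coactivation.
    import numpy as np

    rng = np.random.default_rng(1)
    rt_auditory = rng.normal(420, 60, 500)    # unimodal auditory RTs (ms)
    rt_visual = rng.normal(400, 60, 500)      # unimodal visual RTs (ms)
    rt_redundant = rng.normal(350, 50, 500)   # audiovisual (redundant) RTs (ms)

    t = np.linspace(200, 600, 81)             # probe time points (ms)

    def cdf(rt):
        return np.array([(rt <= ti).mean() for ti in t])

    bound = np.minimum(1.0, cdf(rt_auditory) + cdf(rt_visual))
    violation = cdf(rt_redundant) - bound
    print("max race-model violation:", violation.max())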

  1. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.
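
    One of the image statistics commonly examined in this literature is the slope of the radially averaged Fourier amplitude spectrum, since natural scenes and many artworks show a roughly 1/f amplitude spectrum. The sketch below uses a synthetic image, so the slope value it prints is illustrative only.

    # Radially averaged amplitude spectrum and its log-log slope.
    import numpy as np

    rng = np.random.default_rng(4)
    img = rng.random((256, 256))                 # stand-in for a luminance image

    amp = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = np.array(amp.shape) // 2
    y, x = np.indices(amp.shape)
    radius = np.hypot(y - cy, x - cx).astype(int)

    # Radially average the amplitude spectrum, then fit log(amplitude) vs. log(frequency).
    radial = np.bincount(radius.ravel(), amp.ravel()) / np.bincount(radius.ravel())
    freqs = np.arange(1, 128)
    slope = np.polyfit(np.log(freqs), np.log(radial[1:128]), 1)[0]
    print(f"amplitude-spectrum slope: {slope:.2f}")  # natural scenes: roughly -1; white noise: near 0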

  2. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
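
    The reliability-weighted combination invoked in this record corresponds to maximum-likelihood fusion of two Gaussian location estimates. The sketch below uses illustrative values to show how lowering visual reliability in the periphery shrinks the visual capture of sound.

    # Reliability-weighted (maximum-likelihood) audio-visual localization.
    import numpy as np

    def fuse(mu_v, sigma_v, mu_a, sigma_a):
        """Optimal fusion of visual and auditory location estimates (degrees)."""
        w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
        mu = w_v * mu_v + (1 - w_v) * mu_a
        sigma = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_a**2))
        return mu, sigma, w_v

    # Central vision: vision is much more reliable, so it "captures" the sound.
    print(fuse(mu_v=0.0, sigma_v=1.0, mu_a=5.0, sigma_a=4.0))
    # Periphery: visual reliability drops, so the ventriloquist effect shrinks
    # and the fused estimate moves toward the auditory location.
    print(fuse(mu_v=0.0, sigma_v=3.0, mu_a=5.0, sigma_a=4.0))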

  3. [Sound improves distinction of low intensities of light in the visual cortex of a rabbit].

    PubMed

    Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V

    2011-01-01

    Electrodes were implanted in the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to the substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Sounds alone (without visual stimuli) did not produce a VEP. The amplitude of VEP component N1 (85-110 ms) in response to the complex stimuli (visual plus sound) increased 1.6-fold compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis, and sensory spaces of the complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in these spaces showed that adding the sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2) and to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) by a factor of 1.33. It is suggested that adding a sound improves the discrimination of smaller intensities and impairs the discrimination of larger intensities. The sensory spaces revealed by the complex stimuli were two-dimensional, which may be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.

  4. Dynamic Prototypicality Effects in Visual Search

    ERIC Educational Resources Information Center

    Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan

    2011-01-01

    In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…

  5. Stimulus relevance modulates contrast adaptation in visual cortex

    PubMed Central

    Keller, Andreas J; Houlton, Rachael; Kampa, Björn M; Lesica, Nicholas A; Mrsic-Flogel, Thomas D; Keller, Georg B; Helmchen, Fritjof

    2017-01-01

    A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex. DOI: http://dx.doi.org/10.7554/eLife.21589.001 PMID:28130922

  6. Biased towards food: Electrophysiological evidence for biased attention to food stimuli.

    PubMed

    Kumar, Sanjay; Higgs, Suzanne; Rutters, Femke; Humphreys, Glyn W

    2016-12-01

    We investigated the neural mechanisms involved in bias for food stimuli in our visual environment using event related lateralized (ERL) responses. The participants were presented with a cue (food or non-food item) to either identify or hold in working memory. Subsequently, they had to search for a target in a 2-item display where target and distractor stimuli were each flanked by a picture of a food or a non-food item. The behavioural data showed that performance was strongly affected by food cues, especially when food was held in WM compared to when the cues were merely identified. The temporal dynamics of electrophysiological measures of attention (the N1pc and N2pc) showed that the orienting of attention towards food stimuli was associated with two different mechanisms; an early stage of attentional suppression followed by a later stage of attentional orienting towards food stimuli. In contrast, non-food cues were associated only with the guidance of attention to or away from cued stimuli on valid and invalid trials. The results demonstrate that food items, perhaps due to their motivational significance, modulate the early orienting of attention, including an initial suppressive response to food items. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. The mere exposure effect for visual image.

    PubMed

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    Mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and a pair of dots corresponding to the neighboring vertices of an invisible polygon was sequentially flashed (in red), creating an invisible polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from allocations of numerical characters on its vertices, and then rated their preference for invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated the preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. Absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between exposure and rating phases plays an important role in the mere exposure effect.

  8. Illusory visual motion stimulus elicits postural sway in migraine patients

    PubMed Central

    Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi

    2015-01-01

    Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832

  9. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  10. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

    Guo, Yi; Kim, Eun H; Kim, Eun; Alvarez, Tara; Alvarez, Tara L

    2011-03-25

    Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, development of a platform to stimulate and store eye movements can require substantial programming, time and costs. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. However, the VisualEyes System has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device to acquire eye movement responses, 2) the VisualEyes software written in LabView, to generate an array of stimuli and store responses as text files and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation such as: a limbus tracking system, a sclera search coil, or a video image system. Typical eye movement stimuli such as saccadic steps, vergent ramps and vergent steps with the corresponding responses will be shown. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.

  11. Shades of yellow: interactive effects of visual and odour cues in a pest beetle

    PubMed Central

    Stevenson, Philip C.; Belmain, Steven R.

    2016-01-01

    Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707

  12. Do You "See'" What I "See"? Differentiation of Visual Action Words

    ERIC Educational Resources Information Center

    Dickinson, Joël; Cirelli, Laura; Szeligo, Frank

    2014-01-01

    Dickinson and Szeligo ("Can J Exp Psychol" 62(4):211--222, 2008) found that processing time for simple visual stimuli was affected by the visual action participants had been instructed to perform on these stimuli (e.g., see, distinguish). It was concluded that these effects reflected the differences in the durations of these various…

  13. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals.

    PubMed

    Ikeda, Akitsu; Miyamoto, Jun J; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, the exposure to food stimuli sensitizes the brain's reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has increasingly been suggested as one method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as effectively as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed the identical tasks to Experiments 1 and 2, but the participants did not perform sham feeding with gum-chewing/actual feeding between tasks and they took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze

  14. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals

    PubMed Central

    Ikeda, Akitsu; Miyamoto, Jun J.; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, the exposure to food stimuli sensitizes the brain’s reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has increasingly been suggested as one method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as effectively as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed the identical tasks to Experiments 1 and 2, but the participants did not perform sham feeding with gum-chewing/actual feeding between tasks and they took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias
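
    The reaction-time bias measure used in the visual probe task of the two records above can be sketched as follows; the trial numbers and RT values are synthetic and purely illustrative.

    # Attentional-bias score from a visual probe (dot-probe) task: faster
    # responses when the probe replaces the food image than when it replaces
    # the non-food image indicate a bias toward food.
    import numpy as np

    rng = np.random.default_rng(5)
    rt_probe_at_food = rng.normal(480, 40, 60)      # ms, synthetic trials
    rt_probe_at_nonfood = rng.normal(505, 40, 60)   # ms, synthetic trials

    bias = rt_probe_at_nonfood.mean() - rt_probe_at_food.mean()
    print(f"attentional bias toward food: {bias:.1f} ms")  # positive = bias toward food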

  15. Intraindividual variability in vigilance performance: does degrading visual stimuli mimic age-related "neural noise"?

    PubMed

    MacDonald, Stuart W S; Hultsch, David F; Bunce, David

    2006-07-01

    Intraindividual performance variability, or inconsistency, has been shown to predict neurological status, physiological functioning, and age differences and declines in cognition. However, potential moderating factors of inconsistency are not well understood. The present investigation examined whether inconsistency in vigilance response latencies varied as a function of time-on-task and task demands by degrading visual stimuli in three separate conditions (10%, 20%, and 30%). Participants were 24 younger women aged 21 to 30 years (M = 24.04, SD = 2.51) and 23 older women aged 61 to 83 years (M = 68.70, SD = 6.38). A measure of within-person inconsistency, the intraindividual standard deviation (ISD), was computed for each individual across reaction time (RT) trials (3 blocks of 45 event trials) for each condition of the vigilance task. Greater inconsistency was observed with increasing stimulus degradation and age, even after controlling for group differences in mean RTs and physical condition. Further, older adults were more inconsistent than younger adults for similar degradation conditions, with ISD scores for younger adults in the 30% condition approximating estimates observed for older adults in the 10% condition. Finally, a measure of perceptual sensitivity shared increasing negative associations with ISDs, with this association further modulated as a function of age but to a lesser degree by degradation condition. Results support current hypotheses suggesting that inconsistency serves as a marker of neurological integrity and are discussed in terms of potential underlying mechanisms.
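
    The intraindividual standard deviation (ISD) measure used in this record can be sketched with synthetic data: within-person variability is computed around each person's own mean, which by construction removes between-person differences in mean RT.

    # Intraindividual standard deviation (ISD) of reaction times.
    import numpy as np

    rng = np.random.default_rng(2)
    n_subjects, n_trials = 10, 135                 # e.g., 3 blocks of 45 trials
    mean_rt = rng.normal(500, 80, n_subjects)      # each person's mean RT (ms)
    within_sd = rng.uniform(30, 90, n_subjects)    # each person's trial-to-trial SD (ms)
    rt = mean_rt[:, None] + within_sd[:, None] * rng.normal(0, 1, (n_subjects, n_trials))

    # ISD: the SD of a person's trials around their own mean.
    isd = rt.std(axis=1, ddof=1)
    print(np.round(isd, 1))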

  16. Theta Oscillations in Visual Cortex Emerge with Experience to Convey Expected Reward Time and Experienced Reward Rate

    PubMed Central

    Zold, Camila L.

    2015-01-01

    The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643

  17. Art Expertise Reduces Influence of Visual Salience on Fixation in Viewing Abstract-Paintings

    PubMed Central

    Koide, Naoko; Kubo, Takatomi; Nishida, Satoshi; Shibata, Tomohiro; Ikeda, Kazushi

    2015-01-01

    When viewing a painting, artists perceive more information from the painting on the basis of their experience and knowledge than art novices do. This difference can be reflected in eye scan paths during viewing of paintings. Distributions of scan paths of artists are different from those of novices even when the paintings contain no figurative object (i.e. abstract paintings). There are two possible explanations for this difference of scan paths. One is that artists have high sensitivity to high-level features such as textures and composition of colors and therefore their fixations are more driven by such features compared with novices. The other is that fixations of artists are more attracted by salient features than those of novices and the fixations are driven by low-level features. To test these, we measured eye fixations of artists and novices during the free viewing of various abstract paintings and compared the distribution of their fixations for each painting with a topological attentional map that quantifies the conspicuity of low-level features in the painting (i.e. saliency map). We found that the fixation distribution of artists was more distinguishable from the saliency map than that of novices. This difference indicates that fixations of artists are less driven by low-level features than those of novices. Our result suggests that artists may extract visual information from paintings based on high-level features. This ability of artists may be associated with artists’ deep aesthetic appreciation of paintings. PMID:25658327
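
    One simple way to quantify how well a low-level saliency map predicts fixations, loosely in the spirit of the comparison described in this record, is sketched below with synthetic data; real analyses typically use dedicated saliency models and more refined metrics.

    # Correlate an empirical fixation density map with a saliency map.
    import numpy as np

    rng = np.random.default_rng(3)
    h, w = 48, 64
    saliency = rng.random((h, w))           # stand-in low-level saliency map
    fix_rows = rng.integers(0, h, 200)      # stand-in fixation coordinates
    fix_cols = rng.integers(0, w, 200)

    # Empirical fixation density map (a 2D histogram of fixation locations).
    density = np.zeros((h, w))
    np.add.at(density, (fix_rows, fix_cols), 1.0)

    # Higher correlation means fixations are better explained by low-level
    # saliency; on this kind of measure, artists would be expected to score
    # lower than novices.
    r = np.corrcoef(saliency.ravel(), density.ravel())[0, 1]
    print(f"saliency-fixation correlation: {r:.3f}")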

  18. Effects of Binaural Sensory Aids on the Development of Visual Perceptual Abilities in Visually Handicapped Infants. Final Report, April 15, 1982-November 15, 1982.

    ERIC Educational Resources Information Center

    Hart, Verna; Ferrell, Kay

    Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…

  19. Visual search and attention: an overview.

    PubMed

    Davis, Elizabeth T; Palmer, John

    2004-01-01

    This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.

  20. Visually Evoked Potential Markers of Concussion History in Patients with Convergence Insufficiency

    PubMed Central

    Poltavski, Dmitri; Lederer, Paul; Cox, Laurie Kopko

    2017-01-01

    Purpose: We investigated whether differences in the pattern visual evoked potentials exist between patients with convergence insufficiency and those with convergence insufficiency and a history of concussion using stimuli designed to differentiate between magnocellular (transient) and parvocellular (sustained) neural pathways. Methods: Sustained stimuli included 2-rev/s, 85% contrast checkerboard patterns of 1- and 2-degree check sizes, whereas transient stimuli comprised 4-rev/s, 10% contrast vertical sinusoidal gratings with column width of 0.25 and 0.50 cycles/degree. We tested two models: an a priori clinical model based on an assumption of at least a minimal (beyond instrumentation’s margin of error) 2-millisecond lag of transient response latencies behind sustained response latencies in concussed patients and a statistical model derived from the sample data. Results: Both models discriminated between concussed and nonconcussed groups significantly above chance (with 76% and 86% accuracy, respectively). In the statistical model, patients with mean vertical sinusoidal grating response latencies greater than 119 milliseconds to 0.25-cycle/degree stimuli (or mean vertical sinusoidal latencies >113 milliseconds to 0.50-cycle/degree stimuli) and mean vertical sinusoidal grating amplitudes of less than 14.75 mV to 0.50-cycle/degree stimuli were classified as having had a history of concussion. The resultant receiver operating characteristic curve for this model had excellent discrimination between the concussed and nonconcussed (area under the curve = 0.857; P < .01) groups with sensitivity of 0.92 and specificity of 0.80. Conclusions: The results suggest a promising electrophysiological approach to identifying individuals with convergence insufficiency and a history of concussion. PMID:28609417
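
    The statistical model in this record amounts to a simple threshold rule. The cutoffs below come from the abstract, but the exact logical grouping of the latency and amplitude criteria, the toy records, and the labels are assumptions for illustration only.

    # Threshold rule for classifying a concussion history from transient VEP
    # measures, plus an ROC AUC on toy data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def classify(lat_025, lat_050, amp_050):
        """Return True if a record is classified as 'history of concussion' (assumed grouping)."""
        slow_latency = (lat_025 > 119.0) or (lat_050 > 113.0)
        low_amplitude = amp_050 < 14.75
        return slow_latency and low_amplitude

    # Toy records: (latency to 0.25 cpd, latency to 0.50 cpd, amplitude to 0.50 cpd)
    records = np.array([[125.0, 118.0, 12.0],   # slow and low-amplitude -> positive
                        [110.0, 108.0, 16.0],   # fast and high-amplitude -> negative
                        [121.0, 115.0, 15.5]])  # slow but high-amplitude -> negative
    labels = np.array([1, 0, 0])                # hypothetical ground truth
    preds = np.array([classify(*r) for r in records], dtype=int)
    print("predictions:", preds, "AUC:", roc_auc_score(labels, preds))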

  1. Reading, Comprehension, and Memory Processes: Abstracts of Doctoral Dissertations Published in "Dissertation Abstracts International," July through September 1977 (Vol. 38 Nos. 1 through 3).

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.

    This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 14 titles deal with the following topics: children's ability to read their own dictated oral language; adjunct structure and reading comprehension; learning and comprehension of simultaneously presented stimuli in children of…

  2. Visual scan paths are abnormal in deluded schizophrenics.

    PubMed

    Phillips, M L; David, A S

    1997-01-01

    One explanation for delusion formation is that delusions result from a distorted appreciation of complex stimuli. The study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path, a map tracing the direction and duration of gaze when an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely-deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less-deluded will allow further clarification of the relationship between viewing strategies and delusions.

  3. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    PubMed

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  4. Visual Masking During Pursuit Eye Movements

    ERIC Educational Resources Information Center

    White, Charles W.

    1976-01-01

    Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking--that the target and masking stimuli are flashed on the same part of the retina, or that the target and mask appear in the same place. (Author/RK)

  5. Abstract numeric relations and the visual structure of algebra.

    PubMed

    Landy, David; Brookes, David; Smout, Ryan

    2014-09-01

    Formal algebras are among the most powerful and general mechanisms for expressing quantitative relational statements; yet, even university engineering students, who are relatively proficient with algebraic manipulation, struggle with and often fail to correctly deploy basic aspects of algebraic notation (Clement, 1982). In the cognitive tradition, it has often been assumed that skilled users of these formalisms treat situations in terms of semantic properties encoded in an abstract syntax that governs the use of notation without particular regard to the details of the physical structure of the equation itself (Anderson, 2005; Hegarty, Mayer, & Monk, 1995). We explore how the notational structure of verbal descriptions or algebraic equations (e.g., the spatial proximity of certain words or the visual alignment of numbers and symbols in an equation) plays a role in the process of interpreting or constructing symbolic equations. We propose in particular that construction processes involve an alignment of notational structures across representation systems, biasing reasoners toward the selection of formal notations that maintain the visuospatial structure of source representations. For example, in the statement "There are 5 elephants for every 3 rhinoceroses," the spatial proximity of 5 and elephants and 3 and rhinoceroses will bias reasoners to write the incorrect expression 5E = 3R, because that expression maintains the spatial relationships encoded in the source representation. In 3 experiments, participants constructed equations with given structure, based on story problems with a variety of phrasings. We demonstrate how the notational alignment approach accounts naturally for a variety of previously reported phenomena in equation construction and successfully predicts error patterns that are not accounted for by prior explanations, such as the left to right transcription heuristic.

  6. Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images

    PubMed Central

    Funahashi, Shintaro

    2016-01-01

    Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and monkeys were required to choose either stimulus by eye movements. We considered that the monkeys preferred the chosen stimulus if they continued to look at the stimulus for an additional 6 s and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424

  7. Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.

    PubMed

    Alexander, Gerianne M; Charles, Nora

    2009-06-01

    An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.

  8. Multiaccommodative stimuli in VR systems: problems & solutions.

    PubMed

    Marran, L; Schor, C

    1997-09-01

    Virtual reality environments can introduce multiple and sometimes conflicting accommodative stimuli. For instance, with the high-powered lenses commonly used in head-mounted displays, small discrepancies in screen lens placement, caused by manufacturer error or user adjustment focus error, can change the focal depths of the image by a couple of diopters. This can introduce a binocular accommodative stimulus or, if the displacement between the two screens is unequal, an unequal (anisometropic) accommodative stimulus for the two eyes. Systems that allow simultaneous viewing of virtual and real images can also introduce a conflict in accommodative stimuli: When real and virtual images are at different focal planes, both cannot be in focus at the same time, though they may appear to be in similar locations in space. In this paper four unique designs are described that minimize the range of accommodative stimuli and maximize the visual system's ability to cope efficiently with the focus conflicts that remain: pinhole optics, monocular lens addition combined with aniso-accommodation, chromatic bifocal, and bifocal lens system. The advantages and disadvantages of each design are described and recommendation for design choice is given after consideration of the end use of the virtual reality system (e.g., low or high end, entertainment, technical, or medical use). The appropriate design modifications should allow greater user comfort and better performance.
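
    The claim that millimetre-scale screen-placement errors can shift the accommodative stimulus by diopters can be checked with a worked example under a thin-lens approximation; the lens power and error values below are illustrative assumptions, not figures from the record.

    # If the screen sits a distance dx (in metres) inside the focal plane of a
    # lens of power P (in diopters), the image vergence is approximately
    # -P**2 * dx, so the accommodative stimulus shifts by about P**2 * dx diopters.
    def accommodative_shift(lens_power_d, screen_error_m):
        return lens_power_d**2 * screen_error_m

    print(accommodative_shift(20.0, 0.005), "D")  # 20 D lens, 5 mm error -> about 2 D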

  9. Effect of eye position during human visual-vestibular integration of heading perception.

    PubMed

    Crane, Benjamin T

    2017-09-01

    Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze

  10. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  11. What Does the Future Hold for Scientific Journals? Visual Abstracts and Other Tools for Communicating Research.

    PubMed

    Nikolian, Vahagn C; Ibrahim, Andrew M

    2017-09-01

    Journals fill several important roles within academic medicine, including building knowledge, validating quality of methods, and communicating research. This section provides an overview of these roles and highlights innovative approaches journals have taken to enhance dissemination of research. As journals move away from print formats and embrace web-based content, design-centered thinking will allow for engagement of a larger audience. Examples of recent efforts in this realm are provided, as well as simplified strategies for developing visual abstracts to improve dissemination via social media. Finally, we home in on the principles of learning and education that have driven these advances in multimedia-based communication in scientific research.

  12. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  13. Anxiety and autonomic response to social-affective stimuli in individuals with Williams syndrome.

    PubMed

    Ng, Rowena; Bellugi, Ursula; Järvinen, Anna

    2016-12-01

    Williams syndrome (WS) is a genetic condition characterized by an unusual "hypersocial" personality juxtaposed with high anxiety. Recent evidence suggests that autonomic reactivity to affective face stimuli is disorganized in WS, which may contribute to emotion dysregulation and/or social disinhibition. Electrodermal activity (EDA) and mean interbeat interval (IBI) of 25 participants with WS (19-57 years old) and 16 typically developing (TD; 17-43 years old) adults were measured during a passive presentation of affective face and voice stimuli. The Beck Anxiety Inventory was administered to examine associations between autonomic reactivity to social-affective stimuli and anxiety symptomatology. The WS group was characterized by higher overall anxiety symptomatology and poorer anger recognition in social visual and aural stimuli relative to the TD group. No between-group differences emerged in autonomic response patterns. Notably, for participants with WS, increased anxiety was uniquely associated with diminished arousal to angry faces and voices. In contrast, for the TD group, no associations emerged between anxiety and physiological responsivity to social-emotional stimuli. The anxiety associated with WS appears to be intimately related to reduced autonomic arousal to angry social stimuli, which may also be linked to the characteristic social disinhibition. Copyright © 2016. Published by Elsevier Ltd.

  14. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out with literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the performance of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the temporal processing deficit theory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    In this collection of ergonomics abstracts and annotations the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements,…

  16. Exploring biased attention towards body-related stimuli and its relationship with body awareness.

    PubMed

    Salvato, Gerardo; De Maio, Gabriele; Bottini, Gabriella

    2017-12-08

    Stimuli of great social relevance exogenously capture attention. Here we explored the impact of body-related stimuli on endogenous attention. Additionally, we investigated the influence of internal states on biased attention towards this class of stimuli. Participants were presented with a body, face, or chair cue to hold in memory (Memory task) or merely to attend to (Priming task) and, subsequently, they were asked to find a circle in an unrelated visual search task. In the valid condition, the circle was flanked by the cue. In the invalid condition, the pre-cued picture re-appeared flanking the distracter. In the neutral condition, the cue item did not re-appear in the search display. We found that although bodies and faces benefited from generally faster visual processing than chairs, holding them in memory did not produce any additional attentional advantage compared to when they were merely attended. Furthermore, face cues generated a larger orienting effect than body and chair cues in both the Memory and Priming tasks. Importantly, results showed that individual sensitivity to internal bodily responses predicted the magnitude of the memory-based orienting of attention to bodies, shedding new light on the relationship between body awareness and visuo-spatial attention.
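
    For readers unfamiliar with how an "orienting effect" is derived from this kind of cueing design, the sketch below shows the conventional computation: validity benefit and cost relative to the neutral condition. The reaction times are made-up numbers, not the study's data.

```python
# Illustrative computation of cueing/orienting effects from mean reaction
# times (ms) in the valid, neutral and invalid conditions. The numbers are
# invented; the formulas are the conventional ones, not taken from the paper.

def orienting_effects(rt_valid, rt_neutral, rt_invalid):
    return {
        "benefit (neutral - valid)":         rt_neutral - rt_valid,
        "cost (invalid - neutral)":          rt_invalid - rt_neutral,
        "validity effect (invalid - valid)": rt_invalid - rt_valid,
    }

print(orienting_effects(rt_valid=520, rt_neutral=545, rt_invalid=580))
# A larger validity effect for face cues than for body or chair cues would
# correspond to the larger orienting effect reported in the abstract.
```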

  17. The effect of spatio-temporal distance between visual stimuli on information processing in children with Specific Language Impairment.

    PubMed

    Dispaldro, Marco; Corradi, Nicola

    2015-01-01

    The purpose of this study was to evaluate whether children with Specific Language Impairment (SLI) have a deficit in processing a sequence of two visual stimuli (S1 and S2) presented at different inter-stimulus intervals and in different spatial locations. In particular, the core question is whether S1 identification is disrupted by retroactive interference from S2. To this end, two experiments were conducted in which children with SLI and children with typical development (TD), matched by age and non-verbal IQ, were compared (Experiment 1: SLI n=19, TD n=19; Experiment 2: SLI n=16, TD n=16). Results show group differences in the ability to identify a single stimulus surrounded by flankers (baseline level). Moreover, children with SLI show stronger negative interference from S2, for both temporal and spatial modulation. These results are discussed in the light of an attentional processing limitation in children with SLI. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Mirrored and rotated stimuli are not the same: A neuropsychological and lesion mapping study.

    PubMed

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Champmartin, Cécile; Pouliquen, Dorothée; Cruypeninck, Yohann; Hannequin, Didier; Gérardin, Emmanuel

    2016-05-01

    Agnosia for mirrored stimuli is a rare clinical deficit. Only eight patients have been reported in the literature so far, and little is known about the neural substrates of this agnosia. Using a previously developed experimental test designed to assess this agnosia, the Mirror and Orientation Agnosia Test (MOAT), together with voxel-based lesion-symptom mapping (VLSM), we tested the hypothesis that focal brain-injured patients with right parietal damage would be impaired in discriminating between the canonical view of a visual object and its mirrored and rotated images. Thirty-four consecutively recruited patients with a stroke involving the right or left parietal lobe were included: twenty patients (59%) had a deficit on at least one of the six conditions of the MOAT, fourteen patients (41%) had a deficit on the mirror condition, twelve patients (35%) had a deficit on at least one of the four rotated conditions, and one had a truly selective agnosia for mirrored stimuli. A lesion analysis showed that discrimination of mirrored stimuli was associated with the mesial part of the posterior superior temporal gyrus and the lateral part of the inferior parietal lobule, whereas discrimination of rotated stimuli was associated with the lateral part of the posterior superior temporal gyrus and the mesial part of the inferior parietal lobule, with only a small overlap between the two. These data suggest that the right visual 'dorsal' pathway is essential for accurate perception of mirrored and rotated stimuli, with the cognitive process and anatomical network underlying discrimination of mirrored images being distinct from those underlying discrimination of rotated images. Copyright © 2016 Elsevier Ltd. All rights reserved.
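
    For readers unfamiliar with the logic of VLSM, the sketch below shows the basic idea in schematic form: for each voxel, compare the behavioral scores of patients whose lesions include that voxel with those of patients whose lesions spare it. This is an illustrative toy version with simulated data; the study itself used dedicated VLSM software with appropriate statistical corrections.

```python
# Minimal sketch of the logic behind voxel-based lesion-symptom mapping (VLSM).
# Purely illustrative: random lesions and scores, no multiple-comparison correction.
import numpy as np
from scipy import stats

def vlsm_t_map(lesion_masks, scores):
    """lesion_masks: (n_patients, n_voxels) binary array; scores: (n_patients,)."""
    t_map = np.full(lesion_masks.shape[1], np.nan)
    for v in range(lesion_masks.shape[1]):
        lesioned = scores[lesion_masks[:, v] == 1]
        spared = scores[lesion_masks[:, v] == 0]
        if len(lesioned) >= 2 and len(spared) >= 2:
            t_map[v], _ = stats.ttest_ind(lesioned, spared)
    return t_map  # strongly negative t -> lesions at this voxel predict worse scores

# Toy example: 34 patients, 1,000 voxels, simulated lesions and behavioral scores
rng = np.random.default_rng(1)
masks = rng.integers(0, 2, size=(34, 1000))
scores = rng.normal(size=34)
print(np.nanmin(vlsm_t_map(masks, scores)))
```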

  19. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model that considered the relative sensitivity to lateral motion and a coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
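
    The two-degree-of-freedom decoder mentioned above is not spelled out in the abstract; the sketch below shows one plausible parameterization (a gain on the lateral component of the heading vector plus a coordinate-frame offset), purely as an illustration of how such a model could map actual to perceived heading. The parameter values and the exact functional form are assumptions, not the paper's model.

```python
# Minimal sketch (an assumed parameterization, not the paper's exact model) of a
# two-parameter heading decoder: a gain on the lateral (left-right) component of
# the heading vector relative to the fore-aft component, plus a coordinate offset.
import numpy as np

def perceived_heading(actual_deg, lateral_gain, offset_deg):
    theta = np.radians(actual_deg - offset_deg)   # shift into the offset frame
    lateral = lateral_gain * np.sin(theta)        # over/under-weighted lateral cue
    fore_aft = np.cos(theta)
    return np.degrees(np.arctan2(lateral, fore_aft)) + offset_deg

# Example: a 20% over-weighting of lateral motion and a 10-degree frame offset,
# evaluated over headings spaced like the 72 directions tested in the study.
headings = np.arange(0, 360, 5)
print(perceived_heading(headings[:4], lateral_gain=1.2, offset_deg=10.0))
```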
